This is also a good way to check if our procedure was correct. If you look back into the Mathematical Computation section, you will see that the values match! Next, apply dimensionality reduction to the data to obtain the scores.
# Apply dimensionality reduction (finding the scores)
scores_sk = pca.transform(dataset_scale.values)
print(scores_sk)
[[ 1.28881571 -2.58539236] [ 1.23529003 -2.63038672] [ 1.29608702 -3.35400166] ..., [-3.39489928 -4.92057914] [-3.45961704 -4.93657062] [-3.35244805 -4.98731342]]
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Last, get the biplot with the values obtained in this section.
plt.figure(figsize=(10, 8))
for i in scores_sk:
    plt.scatter(i[0], i[1], color='b')

# Assigning the PCs
pc1_sk = pca.components_[0]
pc2_sk = pca.components_[1]

plt.title('Biplot', fontsize=16, fontweight='bold')
plt.xlabel('PC1 (40.3%)', fontsize=14, fontweight='bold')
plt.ylabel('PC2 (35.1%)', fontsize=14, fontweight='bold')
plt.xlim([-5, 3])
plt.ylim([-6, 4])

# Labels for the loadings
names = list(data_corr.columns)

# Plotting the loadings as vectors
for i in range(len(pc1_sk)):
    plt.arrow(0, 0, pc1_sk[i]*5, pc2_sk[i]*5, color='r', width=0.002, head_width=0.025)
    plt.text(pc1_sk[i]*5, pc2_sk[i]*5, names[i], color='r')
_____no_output_____
MIT
2016/tutorial_final/79/Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
Fitting the data from a Ramsey experiment In this notebook we analyse data from a Ramsey experiment, using the method and data from: Watson, T. F., Philips, S. G. J., Kawakami, E., Ward, D. R., Scarlino, P., Veldhorst, M., … Vandersypen, L. M. K. (2018). A programmable two-qubit quantum processor in silicon. Nature, 555(7698), 633–637. https://doi.org/10.1038/nature25766. The signal that results from a Ramsey experiment oscillates at a frequency corresponding to the difference between the qubit frequency and the MW source frequency. Therefore, it can be used to accurately calibrate the MW source to be on-resonance with the qubit. Additionally, the decay time of the Ramsey signal corresponds to the free-induction decay or T2* of the qubit. This example takes a Ramsey dataset and uses the core function `qtt.algorithms.functions.fit_gauss_ramsey` to fit it, returning the frequency and decay of the signal.
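For intuition, here is a small hedged sketch of the kind of model such a fit assumes: a sinusoid at the detuning frequency inside a Gaussian decay envelope. The amplitude, offset, and parameter ordering below are illustrative assumptions and not necessarily the exact parameterization used by `gauss_ramsey`.

import numpy as np

def ramsey_model(t, amplitude, t2_star, detuning, phase, offset):
    # Sinusoid at the detuning frequency with a Gaussian decay envelope (illustrative form only)
    return amplitude * np.exp(-(t / t2_star) ** 2) * np.sin(2 * np.pi * detuning * t + phase) + offset

# synthetic signal over a 1.6 us wait time, sampled at 40 points (values are made up for illustration)
t = np.linspace(0, 1.6e-6, 40)
synthetic = ramsey_model(t, amplitude=0.2, t2_star=0.4e-6, detuning=3e6, phase=0.0, offset=0.38)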
import numpy as np import matplotlib.pyplot as plt from qtt.algorithms.functions import gauss_ramsey, fit_gauss_ramsey
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Test data, based on the data acquired by Watson et al.:
y_data = np.array([0.6019, 0.5242, 0.3619, 0.1888, 0.1969, 0.3461, 0.5276, 0.5361, 0.4261, 0.28 , 0.2323, 0.2992, 0.4373, 0.4803, 0.4438, 0.3392, 0.3061, 0.3161, 0.3976, 0.4246, 0.398 , 0.3757, 0.3615, 0.3723, 0.3803, 0.3873, 0.3873, 0.3561, 0.37 , 0.3819, 0.3834, 0.3838, 0.37 , 0.383 , 0.3573, 0.3869, 0.3838, 0.3792, 0.3757, 0.3815]) total_wait_time = 1.6e-6 x_data = np.linspace(0, total_wait_time, len(y_data))
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Plotting the data:
plt.figure()
plt.plot(x_data * 1e6, y_data, '--o')
plt.xlabel(r'time ($\mu$s)')
plt.ylabel('Q1 spin-up probability')
plt.show()
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Applying the `fit_gauss_ramsey` function to fit the data:
par_fit_test, _ = fit_gauss_ramsey(x_data, y_data) freq_fit = abs(par_fit_test[2] * 1e-6) t2star_fit = par_fit_test[1] * 1e6
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Plotting the data and the fit:
test_x = np.linspace(0, total_wait_time, 200)

plt.figure()
plt.plot(x_data * 1e6, y_data, 'o', label='Data')
plt.plot(test_x * 1e6, gauss_ramsey(test_x, par_fit_test), label='Fit')
plt.title(r'Frequency detuning: %.1f MHz / $T_2^*$: %.1f $\mu$s' % (freq_fit, t2star_fit))
plt.xlabel(r'time ($\mu$s)')
plt.ylabel('Spin-up probability')
plt.legend()
plt.show()
_____no_output_____
MIT
docs/notebooks/analysis/example_fit_ramsey.ipynb
dpfranke/qtt
Plotting geospatial data on a map In this first activity for geoplotlib, you'll combine methodologies learned in the previous exercise and use theoretical knowledge from previous lessons. Besides wrangling the data, you need to find the area with the given attributes. Before we can start, however, we need to import our dataset. For this activity, we'll work with geospatial data that contains all cities with their coordinates and their population. **Note:** This time the dataset is not yet added to the data folder. You have to download it from here: https://www.kaggle.com/max-mind/world-cities-database (file: worldcitiespop.csv). Loading the dataset
# importing the necessary dependencies import numpy as np import pandas as pd import geoplotlib # loading the Dataset (make sure to have the dataset downloaded)
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** If we import our dataset without defining the dtype of the column *Region* as String, we will get a warning telling us that it has a mixed datatype. We can get rid of this warning by explicitly defining the type of the values in this column using the `dtype` parameter: `dtype={'Region': str}`
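A minimal sketch of the loading step, assuming the downloaded file sits at `./data/worldcitiespop.csv` (adjust the path and variable name to whatever you use in the activity):

import pandas as pd

# load the downloaded dataset; forcing Region to be read as strings avoids the mixed-dtype warning
dataset = pd.read_csv('./data/worldcitiespop.csv', dtype={'Region': str})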
# looking at the data types of each column
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Here we can see the dtypes of each column. Since the String type is not a primitive datatype, it is displayed as `object` here.
# showing the first 5 entries of the dataset
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
--- Mapping `Latitude` and `Longitude` to `lat` and `lon` Most datasets won't be in the format you want. Some of them might have their latitude and longitude values hidden in a different column. This is where the data wrangling skills of lesson 1 are needed. For the given dataset, the transformation is easy: we simply need to map the `Latitude` and `Longitude` columns into the `lat` and `lon` columns used by geoplotlib.
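A hedged sketch of that mapping, assuming the dataframe is called `dataset` and that lowercase copies of the coordinate columns are all geoplotlib needs:

# create the lat/lon columns that geoplotlib expects from the original Latitude/Longitude columns
dataset['lat'] = dataset['Latitude']
dataset['lon'] = dataset['Longitude']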
# mapping Latitude to lat and Longitude to lon
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Geoplotlib's methods expect the dataset columns `lat` and `lon` for plotting. This means your dataframe has to be transformed to resemble this structure. --- Understanding our data It's your first day at work; your boss hands you this dataset and wants you to dig into it and find the areas with the most adjacent cities that have a population of more than 100k. He needs this information to figure out where to expand next. To get a feeling for how many datapoints the dataset contains, we'll plot the whole dataset using dots.
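One possible way to draw that dot plot, assuming the `lat`/`lon` columns from the previous step are in place; this is a sketch, not the activity's reference solution:

import geoplotlib

# plot every city in the dataset as a dot
geoplotlib.dot(dataset)
geoplotlib.show()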
# plotting the whole dataset with dots
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
Other than seeing the density of our datapoints, we also need to get some information about how the data is distributed.
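A hedged sketch of how those numbers could be computed with pandas, assuming the column names from the Kaggle file:

# number of distinct countries and total number of cities
print(dataset['Country'].nunique(), len(dataset))

# number of cities per country (first 20 entries)
print(dataset.groupby('Country').size().head(20))

# average number of cities per country
print(dataset.groupby('Country').size().mean())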
# amount of countries and cities # amount of cities per country (first 20 entries) # average num of cities per country
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
Since we are only interested in areas with densely placed cities and high population, we can filter out cities without a population. Reducing our data Our dataset has more than 3 million cities listed. Many of them are really small and can be ignored, given our objective for this activity. We only want to look at those cities that have a value given for their population. **Note:** If you're having trouble filtering your dataset, you can always check back with the activities in lesson 1.
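One possible filter, sketched with the variable name used in the comments below:

# keep only cities that have a population entry
dataset_with_pop = dataset[dataset['Population'] > 0]
print(dataset_with_pop.head())

# dot density plot of the reduced dataset
geoplotlib.dot(dataset_with_pop)
geoplotlib.show()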
# filter for countries with a population entry (Population > 0) # displaying the first 5 items from dataset_with_pop # showing all cities with a defined population with a dot density plot
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** Not only has the execution time of the visualization decreased, but we can already see where the areas with more cities are. Following the request from our boss, we shall only consider areas that have a high density of adjacent cities with a population of more than 100k.
# dataset with cities with population of >= 100k # displaying all cities >= 100k population with a fixed bounding box (WORLD) in a dot density plot from geoplotlib.utils import BoundingBox
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
**Note:** In order to get the same view on our map every time, we can set the bounding box to the constant viewport declared in the geoplotlib library. We can also instantiate the BoundingBox class with values for north, west, south, and east. --- Finding the best area After reducing our data, we can now use more complex plots to filter down our data even more. Thinking back to the first exercise, we've seen that histograms and voronoi plots can give us a quick visual representation of the density of data. **Note:** Try playing around with the different color maps of the plotting methods; sometimes using other colors not only improves the visuals but also increases the amount of information you can take from the visualization.
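A hedged sketch of these two steps; the `BoundingBox.WORLD` constant, `set_bbox`, and the `voronoi` arguments are quoted from memory of the geoplotlib API and should be checked against your installed version:

import geoplotlib
from geoplotlib.utils import BoundingBox

# cities with a population of at least 100k
dataset_100k = dataset_with_pop[dataset_with_pop['Population'] >= 100000]

# dot density plot with a fixed world viewport
geoplotlib.dot(dataset_100k)
geoplotlib.set_bbox(BoundingBox.WORLD)
geoplotlib.show()

# filled voronoi plot to highlight dense areas
geoplotlib.voronoi(dataset_100k, cmap='hot_r', max_area=1e3, alpha=255)
geoplotlib.show()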
# using filled voronoi to find dense areas
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
In the voronoi plot we can see tendencies: Germany, Great Britain, Nigeria, India, Japan, Java, the east coast of the USA, and Brazil stick out. We can now filter our data again and only look at those countries to find the best-suited one. --- Final call After meeting with your boss, he tells you that the company wants to stick to Europe when it comes to expanding. Filter your data for Germany and Great Britain only and decide which area is your final proposal.
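A hedged sketch of this final step, assuming the `Country` column holds lowercase ISO codes (`de`, `gb`) and that geoplotlib's `delaunay` layer is available; treat it as a starting point rather than the reference solution:

import geoplotlib

# restrict the 100k dataset to Germany and Great Britain
dataset_europe = dataset_100k[dataset_100k['Country'].isin(['de', 'gb'])]

# Delaunay triangulation: shorter (darker) edges indicate denser clusters of cities
geoplotlib.delaunay(dataset_europe, cmap='hot_r')
geoplotlib.show()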
# filter 100k dataset for cities in Germany and GB # using Delaunay triangulation to find the densest area
_____no_output_____
MIT
Lesson05/Activity27/activity27.ipynb
webobite/Data-Visualization-with-Python
Q10: Produce a list of facilities with a total revenue less than 1000. The output should contain the facility name and total revenue, sorted by revenue.
query = """ SELECT sub2.name AS facilityname, sub2.totalrevenue AS totalrevenue FROM ( SELECT sub1.facilityname AS name, SUM(sub1.revenue) AS totalrevenue FROM ( SELECT b.bookid, f.name AS facilityname, CASE WHEN b.memid = 0 THEN (b.slots * f.guestcost) ELSE b.slots * f.membercost END AS Revenue FROM Bookings AS b LEFT JOIN Members AS m ON m.memid = b.memid LEFT JOIN Facilities AS f ON f.facid = b.facid ) AS sub1 GROUP BY sub1.facilityname ) AS sub2 GROUP BY facilityname HAVING totalrevenue < 1000 ORDER BY totalrevenue DESC; """ pd.read_sql_query(query, engine)
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
Q11: Produce a report of members and who recommended them, in alphabetical surname, firstname order.
query = """ SELECT sub2.memberName AS membername, sub2.recommenderfirstname || ', ' || sub2.recommendersurname AS recommendername FROM ( SELECT sub1.memberName AS memberName, sub1.recommenderId AS memberId, m.firstname AS recommenderfirstname, m.surname AS recommendersurname FROM ( SELECT m2.memid AS memberId, m1.firstname || ', ' || m1.surname AS memberName, m2.recommendedby AS recommenderId FROM Members AS m1 INNER JOIN Members AS m2 ON m1.memid = m2.memid WHERE ( m2.recommendedby IS NOT NULL OR m2.recommendedby <> ' ' OR m2.recommendedby <> '' ) AND m1.memid <> 0 ) AS sub1 LEFT JOIN Members AS m ON sub1.recommenderId = m.memid WHERE m.memid <> 0 ) AS sub2; """ pd.read_sql_query(query, engine)
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
Q12: Find the facilities with their usage by member, but not guests
query = """ SELECT f.name AS facilityname, SUM(b.slots) AS slot_usage FROM Bookings AS b LEFT JOIN Facilities AS f ON f.facid = b.facid LEFT JOIN Members AS m ON m.memid = b.memid WHERE b.memid <> 0 GROUP BY facilityname ORDER BY slot_usage DESC; """ pd.read_sql_query(query, engine)
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
Q13: Find the facilities usage by month, but not guests
query = """ SELECT sub.MONTH AS MONTH, sub.facilityname AS facility, SUM(sub.slotNumber) AS slotusage FROM ( SELECT strftime('%m', starttime) AS MONTH, f.name AS facilityname, b.slots AS slotNumber FROM Bookings AS b LEFT JOIN Facilities AS f ON f.facid = b.facid LEFT JOIN Members AS m ON m.memid = b.memid WHERE b.memid <> 0 ) sub GROUP BY MONTH, facility ORDER BY MONTH, slotusage DESC; """ pd.read_sql_query(query, engine)
_____no_output_____
MIT
SQL Case Study - Country Club/Unit-8.3_SQL-Project.ipynb
shalin4788/Springboard-Do-not-refer-
scikit-learn-svm Credits: Forked from [PyCon 2015 Scikit-learn Tutorial](https://github.com/jakevdp/sklearn_pycon2015) by Jake VanderPlas* Support Vector Machine Classifier* Support Vector Machine with Kernels Classifier
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn; from sklearn.linear_model import LinearRegression from scipy import stats import pylab as pl seaborn.set()
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Support Vector Machine Classifier Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for **classification** or for **regression**. SVMs draw a boundary between clusters of data, attempting to maximize the margin between the sets of points. Many lines can be drawn to separate the points generated below:
from sklearn.datasets import make_blobs  # sklearn.datasets.samples_generator is deprecated

X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
xfit = np.linspace(-1, 3.5)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')

# Draw three lines that could separate the data
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)

plt.xlim(-1, 3.5);
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Fit the model:
from sklearn.svm import SVC clf = SVC(kernel='linear') clf.fit(X, y)
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Plot the boundary:
def plot_svc_decision_function(clf, ax=None):
    """Plot the decision function for a 2D SVC"""
    if ax is None:
        ax = plt.gca()
    x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
    y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
    Y, X = np.meshgrid(y, x)
    P = np.zeros_like(X)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            # decision_function expects a 2D array of samples
            P[i, j] = clf.decision_function([[xi, yj]])[0]
    # plot the margins
    ax.contour(X, Y, P, colors='k',
               levels=[-1, 0, 1], alpha=0.5,
               linestyles=['--', '-', '--'])
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
In the following plot the dashed lines touch a couple of the points known as *support vectors*, which are stored in the ``support_vectors_`` attribute of the classifier:
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none');
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Use IPython's ``interact`` functionality to explore how the distribution of points affects the support vectors and the discriminative fit:
from ipywidgets import interact  # IPython.html.widgets is deprecated

def plot_svm(N=100, kernel='linear'):
    X, y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=0.60)
    X = X[:N]
    y = y[:N]
    clf = SVC(kernel=kernel)
    clf.fit(X, y)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
    plt.xlim(-1, 4)
    plt.ylim(-1, 6)
    plot_svc_decision_function(clf, plt.gca())
    plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
                s=200, facecolors='none')

interact(plot_svm, N=[10, 200], kernel='linear');
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Support Vector Machine with Kernels ClassifierKernels are useful when the decision boundary is not linear. A Kernel is some functional transformation of the input data. SVMs have clever tricks to ensure kernel calculations are efficient. In the example below, a linear boundary is not useful in separating the groups of points:
from sklearn.datasets import make_circles  # sklearn.datasets.samples_generator is deprecated

X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
A simple model that could be useful is a **radial basis function**:
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))

from mpl_toolkits import mplot3d

def plot_3D(elev=30, azim=30):
    ax = plt.subplot(projection='3d')
    ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
    ax.view_init(elev=elev, azim=azim)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('r')

interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
In three dimensions, there is a clear separation between the data. Run the SVM with the rbf kernel:
clf = SVC(kernel='rbf') clf.fit(X, y) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none');
_____no_output_____
Apache-2.0
scikit-learn/scikit-learn-svm.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
Exercise: Applying PCA (Principal Component Analysis) In this notebook we will look at a simple example of using PCA. To do so, we will use a dataset with data about different individuals and an indicator of whether they live in a home they bought or in a rented one. This is a classification problem, so we can use one of the classification algorithms we have seen before. However, we will see that we have a large number of variables, which we can reduce thanks to variable-reduction techniques such as PCA. Importing libraries As we have done before, we will start by importing the libraries we are going to use throughout the notebook.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

# New library for using PCA:
from sklearn.decomposition import PCA
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Loading the input data The data about the individuals, with a target indicating whether they live in a bought or rented home, is the following:
dataframe = pd.read_csv(r"comprar_alquilar.csv") dataframe
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
As we can see, the data is numeric, so we will not need to do any conversion of categorical variables. Let's look at the dimensions One of the main steps we always say is worth performing is the analysis of the data. To do so, we are going to analyse the distributions of the data with respect to the target. EXERCISE 1. Use the dataframe we just imported to plot the histogram of each column with respect to the target; that is, for each column we should see the overlaid histograms of the "renting" individuals versus the "buying" ones. Good, we now have a first view of the data. However, we can keep analysing the data to extract information that is useful for understanding it. Correlations of the data Another interesting point when analysing the data is to look at the correlations, since they can point to similar variables that replicate information, or to the variables most relevant to the target. EXERCISE 1. Plot the correlation matrix of the dataframe as a heatmap, showing the value of each relationship. If we analyse the data, we can see that there is a strong relationship between income (ingresos) and savings (ahorros), as well as between income and common expenses (gastos comunes), housing expenses (gastos de vivienda) or even the target. EXERCISE Make a scatter plot of the relationships between **income (ingresos)** and: 1. Savings (ahorros) 2. Common expenses (gastos comunes) 3. Housing expenses (gastos de vivienda) 4. Target (comprar). Do it all in the same figure, with 4 subplots. It would also be interesting to look at other plots that may not show a linear relationship but could a priori be related, such as: 5. Other expenses vs. common expenses, where the target is shown in different colours (for example, renters in blue and buyers in red). Make this plot in a separate figure. Normalisation and standardisation of the data Because of the nature of PCA, where the magnitude of the variables governs how much information each variable contributes, it is very important to keep the data on the same scale. Do you remember how this was done? EXERCISE Standardise the data so that the result has zero mean and unit standard deviation. This way, we bring the variables to dimensions that can be compared with each other. After that, split the data into train and test, and apply a KNN algorithm to classify the data (with the k that gives the best result). Store the results in a variable for later; a hedged sketch of these steps appears after this cell. Applying PCA After normalising, we can use the variable-compression algorithm, PCA, as indicated below. When applying PCA we do not reduce the variables automatically; instead, we create new variables that explain the variance of the original dataset from most to least. That is, the first variable after applying PCA will be the one that explains the most information in the dataset, the second will be the one that maximises the information in the remaining data, and so on until the last ones, which should express a minimal amount of information, since it should all already be explained. Thanks to this, the next step is to reduce the variables while maximising the information, which we can do by removing the last variables. To apply PCA directly, we have 2 options: 1. Do it mathematically, by solving a determinant, as we already did in the Feature Engineering section. 2. Use a scikit-learn object. In this case, we will go with the second:
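The PCA cell below relies on variables such as `X_cols`, `X_train_scaled` and `X_test_scaled` coming out of the exercise above. A minimal, hedged sketch of those steps (the target column name `comprar`, the split ratio and `k=5` are assumptions; tune them to your data):

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# feature columns (everything except the target) and the target column
X_cols = [col for col in dataframe.columns if col != 'comprar']
X = dataframe[X_cols]
y = dataframe['comprar']

# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# standardise: zero mean, unit standard deviation (fit only on the training data)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# baseline KNN on the scaled data; k=5 is a placeholder for the best k found in the exercise
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_scaled, y_train)
baseline_score = knn.score(X_test_scaled, y_test)
print(baseline_score)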
pca = PCA(len(X_cols)) pca.fit(X_train_scaled) X_train_scaled_pca = pca.transform(X_train_scaled) X_test_scaled_pca = pca.transform(X_test_scaled) print(X_train_scaled.shape) print(X_train_scaled_pca.shape)
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Explained variance Thanks to the PCA object, certain parameters are computed automatically:
# Explained variance ratio (out of 1):
pca.explained_variance_ratio_

# Singular values / eigenvalues: related to the explained variance
pca.singular_values_

# Eigenvectors:
pca.components_
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
Let's now plot this measure. To do so, we will use a structure we saw some time ago:
# From the eigenvalues, compute the explained variance
var_exp = pca.explained_variance_ratio_ * 100
cum_var_exp = np.cumsum(pca.explained_variance_ratio_ * 100)

# Bar chart of the variance explained by each eigenvalue, plus the cumulative explained variance
plt.figure(figsize=(6, 4))
plt.bar(range(len(pca.explained_variance_ratio_)), var_exp, alpha=0.5, align='center',
        label='Individual explained variance', color='g')
plt.step(range(len(pca.explained_variance_ratio_)), cum_var_exp, where='mid', linestyle='--',
         label='Cumulative explained variance')
plt.ylabel('Explained Variance Ratio')
plt.xlabel('Principal Components')
plt.legend()

# How many variables are needed to reach a given explained variance:
umbral_varianza_min = 90
cum_var_exp = np.cumsum(pca.explained_variance_ratio_ * 100)
n_var_90 = len(cum_var_exp[cum_var_exp < umbral_varianza_min])
n_var_90
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
EXERCISE Now that we have the principal components, compute the correlation between the new variables. Does the result make sense? (A hedged check is sketched right below.) Prediction based on PCA Now that the new variables are computed, let's go ahead and use the algorithm we had planned. The only thing that changes is the set of variables we use, which will now be a subset of the ones obtained with the PCA transformation. To stay in line with what we saw above, we will keep the variables that reduce the data while retaining 90% of its information. We have 2 options: 1. Select the first n variables from what PCA returns. 2. Invoke PCA with the number n of variables we want.
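A quick, hedged check for the exercise above: the principal components are orthogonal directions, so their pairwise correlations on the training data should come out numerically close to zero (`X_train_scaled_pca` comes from the earlier cells):

import numpy as np

# correlation matrix of the PCA-transformed training data; off-diagonal entries should be ~0
pc_corr = np.corrcoef(X_train_scaled_pca, rowvar=False)
print(np.round(pc_corr, 3))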
# 1. Select the first n variables from what PCA returns:
X_ejercicio_train = X_train_scaled_pca[:, :n_var_90]
X_ejercicio_test = X_test_scaled_pca[:, :n_var_90]

# 2. Invoke PCA with the number n of variables we want
pca_b = PCA(n_var_90)
X_ejercicio_train_b = pca_b.fit_transform(X_train_scaled)
X_ejercicio_test_b = pca_b.transform(X_test_scaled)

# Check that both give the same result:
(np.round(X_ejercicio_train, 4) == np.round(X_ejercicio_train_b, 4)).all()
_____no_output_____
MIT
Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb
franciscocanon-thebridge/bootcamp_thebridge_PTSep20
What I want to do manually:

amn_groups = {
    'AMN_group_"pets friendly"': [
        'AMN_cat(s)', 'AMN_dog(s)', 'AMN_"other pet(s)"',
        'AMN_"pets allowed"', 'AMN_"pets live on this property"'],
    'AMN_group_"safety measures"': [
        'AMN_"lock on bedroom door"', 'AMN_"safety card"'],
    'AMN_group_"winter friendly"': [
        'AMN_"hot tub"', 'AMN_"indoor fireplace"', 'AMN_heating']}

amn_grouped_df = amn_df.copy()
for group_name, group_members in amn_groups.items():
    amn_grouped_df.loc[:, group_name] = amn_df.loc[:, group_members].sum(axis=1)
    amn_grouped_df.drop(group_members, axis=1, inplace=True)

amn_grouped_df.T
from sklearn.decomposition import PCA
from sklearn.preprocessing import Normalizer

pca = PCA(n_components=3)
nml = Normalizer()

amn_pca = pca.fit_transform(nml.fit_transform(amn_df))
amn_pca_df = pd.DataFrame(amn_pca)
print(amn_pca_df.shape)
amn_pca_df.head()

amn_pca_df.to_csv('datasets/Asheville/amn_pca.csv', index=False, header=False)
amn_df.to_csv('datasets/Asheville/amn.csv', index=False, header=True)
_____no_output_____
OML
A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb
shilpiBose29/CIS519-Project
PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

amns = amn_df.values  # DataFrame.as_matrix() was removed in recent pandas
print("Scaling the values...")
amns_scaled = scale(amns)

print("Fit PCA...")
pca = PCA(n_components='mle')
pca.fit(amns_scaled)

print("Cumulative Variance explains...")
var1 = np.cumsum(pca.explained_variance_ratio_ * 100)  # the amount of variance that each PC explains

print("Plotting...")
plt.plot(var1)
plt.show()
Scaling the values... Fit PCA... Cumulative Variance explains... Plotting...
OML
A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb
shilpiBose29/CIS519-Project
Lunar Rock Classification Using image augmentation techniques
!pip install -U tensorflow-gpu from google.colab import drive drive.mount('/content/drive',force_remount=True) import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D from tensorflow.keras.preprocessing.image import ImageDataGenerator import pickle import os import numpy as np import pandas as pd import matplotlib.pyplot as plt # Global Flags to control Data & Training/Valdiation download = False validation = False # downloads and extracts, For Local/G-Drive base_url = '/content/drive/My Drive/personal_hackathons/DataSet/' if download : _URL = 'http://hck.re/kkBIfM' path_to_zip = tf.keras.utils.get_file(base_url +'lunar_rock.zip' , origin=_URL, extract=True) PATH = os.path.join(os.path.dirname(path_to_zip), 'lunar_rock') print("Paths to the ZIP File : {}".format(path_to_zip)) else : PATH='/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/' print("Paths to the Data File : {}".format(PATH)) # Unzip Downloaded data if Downloading is required if download : os.chdir(PATH) #change dir !mkdir train #create a directory named train/ !mkdir test #create a directory named test/ !unzip train.zip -d PATH #unzip data in train/ !unzip test.zip -d PATH #unzip data in test/ !unzip sample_submission.csv.zip !unzip train_labels.csv.zip train_dir = os.path.join(PATH, 'train') train_lg_dir = os.path.join(train_dir, 'Large') # directory with our training Large Lunar rock pictures train_sm_dir = os.path.join(train_dir, 'Small') # directory with our training Small Lunar rock pictures print("Paths Train : {} ".format(train_dir)) print("Paths Train Large : {} ".format(train_lg_dir)) print("Paths Train Small: {} ".format(train_sm_dir)) if validation : validation_dir = os.path.join(PATH, 'validation') validation_lg_dir = os.path.join(validation_dir, 'Large') # directory with our Large Lunar rock pictures validation_sm_dir = os.path.join(validation_dir, 'Small') # directory with our Small Lunar rock pictures num_lg_tr = len(os.listdir(train_lg_dir)) num_sm_tr = len(os.listdir(train_sm_dir)) total_train = num_lg_tr + num_sm_tr if validation : num_lg_val = len(os.listdir(validation_lg_dir)) num_sm_val = len(os.listdir(validation_sm_dir)) total_val = num_cats_val + num_dogs_val print('total training Large images:', num_lg_tr) print('total training Small images:', num_sm_tr) print("Total training images:", total_train) print("--") if validation : print('total validation Large images:', num_lg_val) print('total validation Small images:', num_sm_val) print("Total validation images:", total_val)s batch_size = 128 EPOCHS = 24 IMG_HEIGHT = 480 # As the input image is 480 X 720 IMG_WIDTH = 480 # As the input image is 480 X 720 # Evaluate baseline Model def evaluation(model,generator,data = "Training"): print("--------------Evaluating {} Dataset--------------".format(data)) results = model.evaluate_generator(generator=generator,verbose=1) precision=0 recall=0 for name, value in zip(model.metrics_names, results): print(name, ': ', value) if name.strip() == 'precision': precision = value if name.strip() == 'recall': recall = value if precision !=0 and recall!=0 : f1 = (2 * precision * recall)/(precision+recall) print("f1 : ",f1) def plot_metrices(EPOCHS,history,if_val=True): epochs = range(EPOCHS) plt.title('Accuracy') plt.plot(epochs, history.history['accuracy'], color='blue', label='Train') if if_val: plt.plot(epochs, history.history['val_accuracy'], color='orange', label='Val') 
plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend() _ = plt.figure() plt.title('Loss') plt.plot(epochs, history.history['loss'], color='blue', label='Train') if if_val: plt.plot(epochs, history.history['val_loss'], color='orange', label='Val') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() _ = plt.figure() plt.title('False Negatives') plt.plot(epochs, history.history['fn'], color='blue', label='Train') if if_val: plt.plot(epochs, history.history['val_fn'], color='orange', label='Val') plt.xlabel('Epoch') plt.ylabel('False Negatives') plt.legend() def plot_confusion_matrix(predict,generator,threshold): # Confusion Matrix labels = generator.classes labels_pred = (predict[:,0] > threshold).astype(np.int) cm = confusion_matrix(labels,labels_pred) plt.matshow(cm, alpha=0) plt.title('Confusion matrix') plt.ylabel('Actual label') plt.xlabel('Predicted label') for (i, j), z in np.ndenumerate(cm): plt.text(j, i, str(z), ha='center', va='center') plt.show() print('Legitimate Customers Detected (True Negatives): ', cm[0][0]) print('Legitimate Customers Incorrectly Detected (False Positives): ', cm[0][1]) print('Loan Deafulters Missed (False Negatives): ', cm[1][0]) print('Loan Deafulters Detected (True Positives): ', cm[1][1]) print('Total Loan Deafulters Customers: ', np.sum(cm[1])) def submission_categorical(model,submission_csv = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/results_01.pickle'): # Instantiate Generator test_datagen = ImageDataGenerator(rescale=1./255) test_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/' test_generator = test_datagen.flow_from_directory( test_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), # color_mode="rgb", shuffle = False, class_mode='binary', batch_size=batch_size) # Check test Files filenames = test_generator.filenames nb_samples = len(filenames) print("Test Size : {}".format(nb_samples)) # print(filenames) # Model Prediction test_generator.reset() predict = model.predict_generator(test_generator,verbose=1) print("Model Prediction Shape {}".format(predict.shape)) labels = train_data_gen.class_indices predicted_class_indices=np.argmax(predict,axis=1) print("Labels : {}".format(labels) ) print("Class Indices {}".format(predicted_class_indices)) labels = dict((v,k) for k,v in labels.items()) predictions = [labels[k] for k in predicted_class_indices] results=pd.DataFrame({"Image_File":filenames, "Class":predictions}) print("Distribution : {} ".format(results['Class'].value_counts())) # Write Sumission with open(submission_name,'wb') as f : pickle.dump(results,f) return results def submission_binary(model,threshold = 0.4, submission_name='lunar01_m1.pickle'): # Instantiate Generator test_datagen = ImageDataGenerator(rescale=1./255) test_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/' test_generator = test_datagen.flow_from_directory( test_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), # color_mode="rgb", shuffle = False, class_mode='binary', batch_size=batch_size) # Check test Files filenames = test_generator.filenames nb_samples = len(filenames) print("Test Size : {}".format(nb_samples)) # print(filenames) # Model Prediction test_generator.reset() predict = model.predict_generator(test_generator,verbose=1) print("Model Prediction Shape {}".format(predict.shape)) labels = train_data_gen.class_indices print("Labels : {}".format(labels) ) # Predicting Classes based on Threshold predict_class = predict > threshold predict_class = predict_class.reshape(1,-1) predict_class = predict_class[0] 
results=pd.DataFrame({"Image_File":filenames, "Class":predict_class}) results['Image_File'] = results['Image_File'].apply(lambda x : x[12:]) results['Class'] = results['Class'].map({True: 'Small', False: "Large"}) print("Distribution : {} ".format(results['Class'].value_counts())) # Write Sumission with open(submission_name,'wb') as f : pickle.dump(results,f) return results # When using whole dataset for training train_image_generator = ImageDataGenerator(validation_split=0.2, rotation_range=45, horizontal_flip=True, zoom_range=0.5, rescale=1./255) # Generator for our training data train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size, directory=train_dir, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary', ) # While Splitting into Train & Validation # train_image_generator = ImageDataGenerator(validation_split=0.2, # rotation_range=45, # horizontal_flip=True, # zoom_range=0.5, # rescale=1./255) # Generator for our training data # train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size, # directory=train_dir, # shuffle=True, # target_size=(IMG_HEIGHT, IMG_WIDTH), # class_mode='binary', # subset='training') # val_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size, # directory=train_dir, # shuffle=False, # target_size=(IMG_HEIGHT, IMG_WIDTH), # class_mode='binary', # subset='validation') # Only to be used When we have a different Validation Set if validation : validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size, directory=validation_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary') sample_training_images, sample_training_labels = next(train_data_gen) sample_training_labels[:5] # This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column. 
def plotImages(images_arr): fig, axes = plt.subplots(1, 5, figsize=(20,20)) axes = axes.flatten() for img, ax in zip( images_arr, axes): ax.imshow(img) ax.axis('off') plt.tight_layout() plt.show() plotImages(sample_training_images[:5]) def model_metrics(): metrics = [ keras.metrics.Accuracy(name='accuracy'), keras.metrics.TruePositives(name='tp'), keras.metrics.FalsePositives(name='fp'), keras.metrics.TrueNegatives(name='tn'), keras.metrics.FalseNegatives(name='fn'), keras.metrics.Precision(name='precision'), keras.metrics.Recall(name='recall'), keras.metrics.AUC(name='auc') ] return metrics metrics = model_metrics() def make_model1(metrics=metrics): model = Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)), MaxPooling2D(), Conv2D(32, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Flatten(), Dense(512, activation='relu'), Dense(1, activation='sigmoid') ]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=metrics) model.summary() return model def make_model2(metrics=metrics): model = Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)), MaxPooling2D(), Dropout(0.2), Conv2D(32, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Dropout(0.2), Flatten(), Dense(512, activation='relu'), Dense(1, activation='sigmoid') ]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=metrics) model.summary() return model model = make_model2() history = model.fit_generator( train_data_gen, # steps_per_epoch=total_train // batch_size, epochs=EPOCHS, # validation_data=val_data_gen, # validation_steps=total_val // batch_size ) s# Save the entire model to a HDF5 file. # The '.h5' extension indicates that the model shuold be saved to HDF5. model.save(PATH+'my_model02_m2.h5') evaluation(model,train_data_gen,data = "Training") # evaluation(model,val_data_gen,data = "Validation") plot_metrices(EPOCHS,history,if_val=False) generator = train_data_gen predict = model.predict_generator(train_data_gen,verbose=1) plot_confusion_matrix(predict=predict,generator=train_data_gen,threshold=0.2) sub = submission_binary(model,threshold=0.4,submission_name=PATH+'lunar01_m2.pickle')
_____no_output_____
MIT
hackathons/lunar_image_classification/02_image_augmentation.ipynb
amitbcp/machine_learning_with_Scikit_Learn_and_TensorFlow
Evaluation: prediction to determine the threshold. The submission_binary function is written out in the separate cells below.
filenames=test_generator.filenames results=pd.DataFrame({"Image_File":filenames, "Class":predict_class}) results['Image_File'] = results['Image_File'].apply(lambda x : x[12:]) # # results['Class'] = results[results.Score == True ] results['Class'] = results['Class'].map({True: 'Small', False: "Large"}) results['Class'].value_counts() # Instantiate Generator test_datagen = ImageDataGenerator(rescale=1./255) test_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/' test_generator = test_datagen.flow_from_directory( test_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), # color_mode="rgb", shuffle = False, class_mode='binary', batch_size=batch_size) # Check test Files filenames = test_generator.filenames nb_samples = len(filenames) print("Test Size : {}".format(nb_samples)) # print(filenames) # Model Prediction test_generator.reset() predict = model.predict_generator(test_generator,verbose=1) print("Model Prediction Shape {}".format(predict.shape)) labels = train_data_gen.class_indices print("Labels : {}".format(labels) ) threshold = 0.8 # Predicting Classes based on Threshold predict_class = predict > threshold predict_class = predict_class.reshape(1,-1) predict_class = predict_class[0] results=pd.DataFrame({"Image_File":filenames, "Class":predict_class}) results['Image_File'] = results['Image_File'].apply(lambda x : x[12:]) results['Class'] = results['Class'].map({True: 'Small', False: "Large"}) print("Distribution : {} ".format(results['Class'].value_counts())) # # Write Sumission # with open(submission_name,'wb') as f : # pickle.dump(results,f)
Distribution : Large 3772 Small 3762 Name: Class, dtype: int64
MIT
hackathons/lunar_image_classification/02_image_augmentation.ipynb
amitbcp/machine_learning_with_Scikit_Learn_and_TensorFlow
Db2 Connection Document This notebook contains the connect statement that will be used for connecting to Db2. The typical way of connecting to Db2 within a notebook is to run the db2 notebook (`db2.ipynb`) and then issue the `%sql connect` statement:

```sql
%run db2.ipynb
%sql connect to sample user ...
```

Rather than having to change the connect statement in every notebook, this one file can be changed and all of the other notebooks will use the value in here. Note that if you do reset a connection within a notebook, you will need to issue the `CONNECT` command again or run this notebook to re-connect. The `db2.ipynb` file is still used at the beginning of all notebooks to highlight the fact that we are using special code to allow Db2 commands to be issued from within Jupyter Notebooks. Connect to Db2 This code will connect to Db2 locally.
%sql CONNECT TO SAMPLE USER DB2INST1 USING db2inst1 HOST 10.0.0.2 PORT 50000
_____no_output_____
Apache-2.0
connection.ipynb
Db2-DTE-POC/Db2-Click-To-Containerize-Lab
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Hello, many worlds This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces Cirq, a Python framework to create, edit, and invoke Noisy Intermediate Scale Quantum (NISQ) circuits, and demonstrates how Cirq interfaces with TensorFlow Quantum. Setup
!pip install tensorflow==2.4.1
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Install TensorFlow Quantum:
!pip install tensorflow-quantum # Update package resources to account for version changes. import importlib, pkg_resources importlib.reload(pkg_resources)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now import TensorFlow and the module dependencies:
import tensorflow as tf import tensorflow_quantum as tfq import cirq import sympy import numpy as np # visualization tools %matplotlib inline import matplotlib.pyplot as plt from cirq.contrib.svg import SVGCircuit
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
1. The Basics 1.1 Cirq and parameterized quantum circuits Before exploring TensorFlow Quantum (TFQ), let's look at some Cirq basics. Cirq is a Python library for quantum computing from Google. You use it to define circuits, including static and parameterized gates. Cirq uses SymPy symbols to represent free parameters.
a, b = sympy.symbols('a b')
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The following code creates a two-qubit circuit using your parameters:
# Create two qubits q0, q1 = cirq.GridQubit.rect(1, 2) # Create a circuit on these qubits using the parameters you created above. circuit = cirq.Circuit( cirq.rx(a).on(q0), cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1)) SVGCircuit(circuit)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
To evaluate circuits, you can use the `cirq.Simulator` interface. You replace free parameters in a circuit with specific numbers by passing in a `cirq.ParamResolver` object. The following code calculates the raw state vector output of your parameterized circuit:
# Calculate a state vector with a=0.5 and b=-0.5. resolver = cirq.ParamResolver({a: 0.5, b: -0.5}) output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector output_state_vector
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the Pauli operators $\hat{X}$, $\hat{Y}$, and $\hat{Z}$. As illustration, the following code measures $\hat{Z}_0$ and $\frac{1}{2}\hat{Z}_0 + \hat{X}_1$ on the state vector you just simulated:
z0 = cirq.Z(q0) qubit_map={q0: 0, q1: 1} z0.expectation_from_state_vector(output_state_vector, qubit_map).real z0x1 = 0.5 * z0 + cirq.X(q1) z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
1.2 Quantum circuits as tensors TensorFlow Quantum (TFQ) provides `tfq.convert_to_tensor`, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to our quantum layers and quantum ops. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis:
# Rank 1 tensor containing 1 circuit. circuit_tensor = tfq.convert_to_tensor([circuit]) print(circuit_tensor.shape) print(circuit_tensor.dtype)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This encodes the Cirq objects as `tf.string` tensors that `tfq` operations decode as needed.
# Rank 1 tensor containing 2 Pauli operators. pauli_tensor = tfq.convert_to_tensor([z0, z0x1]) pauli_tensor.shape
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
1.3 Batching circuit simulation TFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on *expectation values*. The highest-level interface for calculating expectation values is the `tfq.layers.Expectation` layer, which is a `tf.keras.Layer`. In its simplest form, this layer is equivalent to simulating a parameterized circuit over many `cirq.ParamResolvers`; however, TFQ allows batching following TensorFlow semantics, and circuits are simulated using efficient C++ code. Create a batch of values to substitute for our `a` and `b` parameters:
batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Batching circuit execution over parameter values in Cirq requires a loop:
cirq_results = [] cirq_simulator = cirq.Simulator() for vals in batch_vals: resolver = cirq.ParamResolver({a: vals[0], b: vals[1]}) final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector cirq_results.append( [z0.expectation_from_state_vector(final_state_vector, { q0: 0, q1: 1 }).real]) print('cirq batch results: \n {}'.format(np.array(cirq_results)))
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The same operation is simplified in TFQ:
tfq.layers.Expectation()(circuit, symbol_names=[a, b], symbol_values=batch_vals, operators=z0)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2. Hybrid quantum-classical optimization Now that you've seen the basics, let's use TensorFlow Quantum to construct a *hybrid quantum-classical neural net*. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the `0` or `1` state, overcoming a simulated systematic calibration error. This figure shows the architecture. Even without a neural network this is a straightforward problem to solve, but the theme is similar to the real quantum control problems you might solve using TFQ. It demonstrates an end-to-end example of a quantum-classical computation using the `tfq.layers.ControlledPQC` (Parametrized Quantum Circuit) layer inside of a `tf.keras.Model`. For the implementation of this tutorial, this architecture is split into 3 parts:
- The *input circuit* or *datapoint circuit*: The first three $R$ gates.
- The *controlled circuit*: The other three $R$ gates.
- The *controller*: The classical neural network setting the parameters of the controlled circuit.
2.1 The controlled circuit definition Define a learnable single bit rotation, as indicated in the figure above. This will correspond to our controlled circuit.
# Parameters that the classical NN will feed values into. control_params = sympy.symbols('theta_1 theta_2 theta_3') # Create the parameterized circuit. qubit = cirq.GridQubit(0, 0) model_circuit = cirq.Circuit( cirq.rz(control_params[0])(qubit), cirq.ry(control_params[1])(qubit), cirq.rx(control_params[2])(qubit)) SVGCircuit(model_circuit)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2.2 The controller Now define the controller network:
# The classical neural network layers. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. The controller is randomly initialized so these outputs are not useful, yet.
controller(tf.constant([[0.0],[1.0]])).numpy()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2.3 Connect the controller to the circuit Use `tfq` to connect the controller to the controlled circuit, as a single `keras.Model`. See the [Keras Functional API guide](https://www.tensorflow.org/guide/keras/functional) for more about this style of model definition.First define the inputs to the model:
# This input is the simulated miscalibration that the model will learn to correct. circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.string, name='circuits_input') # Commands will be either `0` or `1`, specifying the state to set the qubit to. commands_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.float32, name='commands_input')
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Next apply operations to those inputs, to define the computation.
dense_2 = controller(commands_input) # TFQ layer for classically controlled circuits. expectation_layer = tfq.layers.ControlledPQC(model_circuit, # Observe Z operators = cirq.Z(qubit)) expectation = expectation_layer([circuits_input, dense_2])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now package this computation as a `tf.keras.Model`:
# The full Keras model is built from our layers. model = tf.keras.Model(inputs=[circuits_input, commands_input], outputs=expectation)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The network architecture is indicated by the plot of the model below.Compare this model plot to the architecture diagram to verify correctness.Note: May require a system install of the `graphviz` package.
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This model takes two inputs: The commands for the controller, and the input-circuit whose output the controller is attempting to correct. 2.4 The dataset The model attempts to output the correct correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below.
# The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired Z expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
This is not the entire training dataset for this task. Each datapoint in the dataset also needs an input circuit. 2.4 Input circuit definition The input circuit below defines the random miscalibration the model will learn to correct.
random_rotations = np.random.uniform(0, 2 * np.pi, 3)
noisy_preparation = cirq.Circuit(
    cirq.rx(random_rotations[0])(qubit),
    cirq.ry(random_rotations[1])(qubit),
    cirq.rz(random_rotations[2])(qubit)
)
datapoint_circuits = tfq.convert_to_tensor([
    noisy_preparation
] * 2)  # Make two copies of this circuit
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
There are two copies of the circuit, one for each datapoint.
datapoint_circuits.shape
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
2.5 Training With the inputs defined you can test-run the `tfq` model.
model([datapoint_circuits, commands]).numpy()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Now run a standard training process to adjust these values towards the `expected_outputs`.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() model.compile(optimizer=optimizer, loss=loss) history = model.fit(x=[datapoint_circuits, commands], y=expected_outputs, epochs=30, verbose=0) plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
From this plot you can see that the neural network has learned to overcome the systematic miscalibration. 2.6 Verify outputs Now use the trained model to correct the qubit calibration errors. With Cirq:
def check_error(command_values, desired_values): """Based on the value in `command_value` see how well you could prepare the full circuit to have `desired_value` when taking expectation w.r.t. Z.""" params_to_prepare_output = controller(command_values).numpy() full_circuit = noisy_preparation + model_circuit # Test how well you can prepare a state to get expectation the expectation # value in `desired_values` for index in [0, 1]: state = cirq_simulator.simulate( full_circuit, {s:v for (s,v) in zip(control_params, params_to_prepare_output[index])} ).final_state_vector expt = cirq.Z(qubit).expectation_from_state_vector(state, {qubit: 0}).real print(f'For a desired output (expectation) of {desired_values[index]} with' f' noisy preparation, the controller\nnetwork found the following ' f'values for theta: {params_to_prepare_output[index]}\nWhich gives an' f' actual expectation of: {expt}\n') check_error(commands, expected_outputs)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The value of the loss function during training provides a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the above cell are to `desired_values`. If you aren't as concerned with the parameter values, you can always check the outputs from above using `tfq`:
model([datapoint_circuits, commands])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
3 Learning to prepare eigenstates of different operators The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. You could have just as easily wanted 1 to correspond to the $+ \hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a different measurement operator for each command, as indicated in the figure below. This requires use of `tfq.layers.Expectation`. Now your input has grown to include three objects: circuit, command, and operator. The output is still the expectation value. 3.1 New model definition Let's take a look at the model to accomplish this task:
# Define inputs. commands_input = tf.keras.layers.Input(shape=(1,), dtype=tf.dtypes.float32, name='commands_input') circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.dtypes.string, name='circuits_input') operators_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.string, name='operators_input')
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Here is the controller network:
# Define classical NN. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
Combine the circuit and the controller into a single `keras.Model` using `tfq`:
dense_2 = controller(commands_input) # Since you aren't using a PQC or ControlledPQC you must append # your model circuit onto the datapoint circuit tensor manually. full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit) expectation_output = tfq.layers.Expectation()(full_circuit, symbol_names=control_params, symbol_values=dense_2, operators=operators_input) # Construct your Keras model. two_axis_control_model = tf.keras.Model( inputs=[circuits_input, commands_input, operators_input], outputs=[expectation_output])
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
3.2 The dataset Now you will also include the operators you wish to measure for each datapoint you supply for `model_circuit`:
# The operators to measure, for each command. operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]]) # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32)
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
3.3 Training Now that you have your new inputs and outputs, you can train once again using Keras.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() two_axis_control_model.compile(optimizer=optimizer, loss=loss) history = two_axis_control_model.fit( x=[datapoint_circuits, commands, operator_data], y=expected_outputs, epochs=30, verbose=1) plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show()
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
The loss function has dropped to zero. The `controller` is available as a stand-alone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of `random_rotations`.
controller.predict(np.array([0,1]))
_____no_output_____
Apache-2.0
docs/tutorials/hello_many_worlds.ipynb
sarvex/tensorflow-quantum
A Simple Neural Network from Scratch with PyTorch and Google Colab In this tutorial we will implement a simple neural network from scratch using PyTorch. The idea of the tutorial is to teach you the basics of PyTorch and how it can be used to implement a neural network from scratch. I will go over some of the basic functionalities and concepts available in PyTorch that will allow you to build your own neural networks. This tutorial assumes you have prior knowledge of how a neural network works. Don’t worry! Even if you are not so sure, you will be okay. For advanced PyTorch users, this tutorial may still serve as a refresher. This tutorial is heavily inspired by this [Neural Network implementation](https://repl.it/talk/announcements/Build-a-Neural-Network-in-Python/5457) coded purely using Numpy. In fact, I tried re-implementing the code using PyTorch instead and added my own intuitions and explanations. Thanks to [Samay](https://repl.it/@shamdasani) for his phenomenal work; I hope this inspires many others as it did me. Since we are working on Google Colab, we will need to install the PyTorch library. You can do this by using the following command:
!pip3 install torch torchvision
Collecting torch [?25l Downloading https://files.pythonhosted.org/packages/7e/60/66415660aa46b23b5e1b72bc762e816736ce8d7260213e22365af51e8f9c/torch-1.0.0-cp36-cp36m-manylinux1_x86_64.whl (591.8MB)  100% |████████████████████████████████| 591.8MB 26kB/s tcmalloc: large alloc 1073750016 bytes == 0x61f82000 @ 0x7f400bb202a4 0x591a07 0x5b5d56 0x502e9a 0x506859 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x504c28 0x502540 0x502f3d 0x507641 [?25hCollecting torchvision [?25l Downloading https://files.pythonhosted.org/packages/ca/0d/f00b2885711e08bd71242ebe7b96561e6f6d01fdb4b9dcf4d37e2e13c5e1/torchvision-0.2.1-py2.py3-none-any.whl (54kB)  100% |████████████████████████████████| 61kB 23.4MB/s [?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.14.6) Collecting pillow>=4.1.1 (from torchvision) [?25l Downloading https://files.pythonhosted.org/packages/92/e3/217dfd0834a51418c602c96b110059c477260c7fee898542b100913947cf/Pillow-5.4.0-cp36-cp36m-manylinux1_x86_64.whl (2.0MB)  100% |████████████████████████████████| 2.0MB 6.8MB/s [?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.11.0) Installing collected packages: torch, pillow, torchvision Found existing installation: Pillow 4.0.0 Uninstalling Pillow-4.0.0: Successfully uninstalled Pillow-4.0.0 Successfully installed pillow-5.4.0 torch-1.0.0 torchvision-0.2.1
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
The `torch` module provides all the necessary **tensor** operators you will need to implement your first neural network from scratch in PyTorch. That's right! In PyTorch everything is a Tensor, so this is the first thing you will need to get used to.
import torch import torch.nn as nn
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Data Let's start by creating some sample data using the `torch.tensor` command. In Numpy, this could be done with `np.array`. Both functions serve the same purpose, but in PyTorch everything is a Tensor as opposed to a vector or matrix. We define types in PyTorch using the `dtype=torch.xxx` command. In the data below, `X` represents the amount of hours studied and how much time students spent sleeping, whereas `y` represents grades. The variable `xPredicted` is a single input for which we want to predict a grade using the parameters learned by the neural network. Remember, the neural network wants to learn a mapping between `X` and `y`, so it will try to take a guess from what it has learned from the training data.
X = torch.tensor(([2, 9], [1, 5], [3, 6]), dtype=torch.float) # 3 X 2 tensor y = torch.tensor(([92], [100], [89]), dtype=torch.float) # 3 X 1 tensor xPredicted = torch.tensor(([4, 8]), dtype=torch.float) # 1 X 2 tensor
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
You can check the size of the tensors we have just created with the `size` command. This is equivalent to the `shape` command used in tools such as Numpy and Tensorflow.
print(X.size()) print(y.size())
torch.Size([3, 2]) torch.Size([3, 1])
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Scaling Below we are performing some scaling on the sample data. Notice that the `max` function returns both a tensor and the corresponding indices. So we use `_` to capture the indices, which we won't use here because we are only interested in the max values to conduct the scaling. Perfect! Our data is now in a very nice format that our neural network will appreciate later on.
# scale units X_max, _ = torch.max(X, 0) xPredicted_max, _ = torch.max(xPredicted, 0) X = torch.div(X, X_max) xPredicted = torch.div(xPredicted, xPredicted_max) y = y / 100 # max test score is 100 print(xPredicted)
tensor([0.5000, 1.0000])
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Notice that there are two functions `max` and `div` that I didn't discuss above. They do exactly what they imply: `max` finds the maximum value in a vector... I mean tensor; and `div` is basically a nice little function to divide two tensors. Model (Computation Graph) Once the data has been processed and it is in the proper format, all you need to do now is to define your model. Here is where things begin to change a little as compared to how you would build your neural networks using, say, something like Keras or Tensorflow. However, you will realize quickly as you go along that PyTorch doesn't differ much from other deep learning tools. At the end of the day we are constructing a computation graph, which is used to dictate how data should flow and what type of operations are performed on this information. For illustration purposes, we are building the following neural network or computation graph:![alt text](https://drive.google.com/uc?export=view&id=1l-sKpcCJCEUJV1BlAqcVAvLXLpYCInV6)
class Neural_Network(nn.Module): def __init__(self, ): super(Neural_Network, self).__init__() # parameters # TODO: parameters can be parameterized instead of declaring them here self.inputSize = 2 self.outputSize = 1 self.hiddenSize = 3 # weights self.W1 = torch.randn(self.inputSize, self.hiddenSize) # 2 X 3 tensor self.W2 = torch.randn(self.hiddenSize, self.outputSize) # 3 X 1 tensor def forward(self, X): self.z = torch.matmul(X, self.W1) # 3 X 3 ".dot" does not broadcast in PyTorch self.z2 = self.sigmoid(self.z) # activation function self.z3 = torch.matmul(self.z2, self.W2) o = self.sigmoid(self.z3) # final activation function return o def sigmoid(self, s): return 1 / (1 + torch.exp(-s)) def sigmoidPrime(self, s): # derivative of sigmoid return s * (1 - s) def backward(self, X, y, o): self.o_error = y - o # error in output self.o_delta = self.o_error * self.sigmoidPrime(o) # derivative of sig to error self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2)) self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2) self.W1 += torch.matmul(torch.t(X), self.z2_delta) self.W2 += torch.matmul(torch.t(self.z2), self.o_delta) def train(self, X, y): # forward + backward pass for training o = self.forward(X) self.backward(X, y, o) def saveWeights(self, model): # we will use the PyTorch internal storage functions torch.save(model, "NN") # you can reload model with all the weights and so forth with: # torch.load("NN") def predict(self): print ("Predicted data based on trained weights: ") print ("Input (scaled): \n" + str(xPredicted)) print ("Output: \n" + str(self.forward(xPredicted)))
_____no_output_____
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
For the purpose of this tutorial, we are not going to be talking math stuff, that's for another day. I just want you to get a gist of what it takes to build a neural network from scratch using PyTorch. Let's break down the model which was declared via the class above. Class Header First, we defined our model via a class because that is the recommended way to build the computation graph. The class header contains the name of the class `Neural Network` and the parameter `nn.Module` which basically indicates that we are defining our own neural network. ```python class Neural_Network(nn.Module): ``` Initialization The next step is to define the initializations ( `def __init__(self,)`) that will be performed upon creating an instance of the customized neural network. You can declare the parameters of your model here, but typically, you would declare the structure of your network in this section -- the size of the hidden layers and so forth. Since we are building the neural network from scratch, we explicitly declared the size of the weight matrices: one that stores the parameters from the input to hidden layer, and one that stores the parameters from the hidden to output layer. Both weight matrices are initialized with values randomly chosen from a normal distribution via `torch.randn(...)`. Note that we are not using bias just to keep things as simple as possible. ```python def __init__(self, ): super(Neural_Network, self).__init__() # parameters # TODO: parameters can be parameterized instead of declaring them here self.inputSize = 2 self.outputSize = 1 self.hiddenSize = 3 # weights self.W1 = torch.randn(self.inputSize, self.hiddenSize) # 2 X 3 tensor self.W2 = torch.randn(self.hiddenSize, self.outputSize) # 3 X 1 tensor ``` The Forward Function The `forward` function is where all the magic happens (see below). This is where the data enters and is fed into the computation graph (i.e., the neural network structure we have built). Since we are building a simple neural network with one hidden layer, our forward function looks very simple: ```python def forward(self, X): self.z = torch.matmul(X, self.W1) self.z2 = self.sigmoid(self.z) # activation function self.z3 = torch.matmul(self.z2, self.W2) o = self.sigmoid(self.z3) # final activation function return o ``` The `forward` function above takes the input `X` and then performs a matrix multiplication (`torch.matmul(...)`) with the first weight matrix `self.W1`. Then the `sigmoid` activation function is applied to the result. The resulting matrix of the activation is then multiplied with the second weight matrix `self.W2`. Then another activation is performed, which renders the output of the neural network or computation graph. The process I described above is simply what's known as a `feedforward pass`. In order for the weights to be optimized during training, we need a backpropagation algorithm. The Backward Function The `backward` function contains the backpropagation algorithm, where the goal is to essentially minimize the loss with respect to our weights. In other words, the weights need to be updated in such a way that the loss decreases while the neural network is training (well, that is what we hope for). All this magic is possible with the gradient descent algorithm which is declared in the `backward` function.
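One detail worth noting before reading `backward`: `sigmoidPrime(s)` returns `s * (1 - s)`, which is the derivative of the sigmoid written in terms of the sigmoid *output*, so it must be called on values that have already been passed through `sigmoid` (as `backward` does with `o` and `self.z2`). Here is a quick sanity check against PyTorch's autograd; it is illustrative only and not part of the tutorial code:

```python
import torch

# Illustrative check: d/dx sigmoid(x) equals sigmoid(x) * (1 - sigmoid(x)),
# which is exactly what sigmoidPrime computes when given the activated value.
x = torch.tensor([0.5], requires_grad=True)
s = torch.sigmoid(x)
s.backward()                                  # autograd gradient of sigmoid at x
manual = s.detach() * (1 - s.detach())        # sigmoidPrime applied to the activation
print(x.grad, manual)                         # both are approximately tensor([0.2350])
```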
Take a minute or two to inspect what is happening in the code below: ```python def backward(self, X, y, o): self.o_error = y - o # error in output self.o_delta = self.o_error * self.sigmoidPrime(o) self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2)) self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2) self.W1 += torch.matmul(torch.t(X), self.z2_delta) self.W2 += torch.matmul(torch.t(self.z2), self.o_delta) ``` Notice that we are performing a lot of matrix multiplications along with the transpose operations via the `torch.matmul(...)` and `torch.t(...)` operations, respectively. The rest is simply gradient descent -- there is nothing to it. Training All that is left now is to train the neural network. First we create an instance of the computation graph we have just built: ```python NN = Neural_Network() ``` Then we train the model for `1000` rounds. Notice that in PyTorch `NN(X)` automatically calls the `forward` function so there is no need to explicitly call `NN.forward(X)`. After we have obtained the predicted output for every round of training, we compute the loss with the following code: ```python torch.mean((y - NN(X))**2).detach().item() ``` The next step is to start the training (forward + backward) via `NN.train(X, y)`. After we have trained the neural network, we can store the model and output the predicted value of the single instance we declared in the beginning, `xPredicted`. Let's train!
NN = Neural_Network() for i in range(1000): # trains the NN 1,000 times print ("#" + str(i) + " Loss: " + str(torch.mean((y - NN(X))**2).detach().item())) # mean sum squared loss NN.train(X, y) NN.saveWeights(NN) NN.predict()
#0 Loss: 0.28770461678504944 #1 Loss: 0.19437099993228912 #2 Loss: 0.129642054438591 #3 Loss: 0.08898762613534927 #4 Loss: 0.0638350322842598 #5 Loss: 0.04783045873045921 #6 Loss: 0.037219222635030746 #7 Loss: 0.029889358207583427 #8 Loss: 0.024637090042233467 #9 Loss: 0.020752854645252228 #10 Loss: 0.01780204102396965 #11 Loss: 0.015508432872593403 #12 Loss: 0.013690348714590073 #13 Loss: 0.012224685400724411 #14 Loss: 0.011025689542293549 #15 Loss: 0.0100322300568223 #16 Loss: 0.009199750609695911 #17 Loss: 0.008495191112160683 #18 Loss: 0.007893583737313747 #19 Loss: 0.007375772576779127 #20 Loss: 0.006926907692104578 #21 Loss: 0.006535270716995001 #22 Loss: 0.006191555876284838 #23 Loss: 0.005888286512345076 #24 Loss: 0.005619380157440901 #25 Loss: 0.0053798723965883255 #26 Loss: 0.005165652371942997 #27 Loss: 0.004973314236849546 #28 Loss: 0.0048000202514231205 #29 Loss: 0.004643348511308432 #30 Loss: 0.00450127711519599 #31 Loss: 0.004372074268758297 #32 Loss: 0.004254247527569532 #33 Loss: 0.004146536346524954 #34 Loss: 0.004047831054776907 #35 Loss: 0.003957169130444527 #36 Loss: 0.0038737261202186346 #37 Loss: 0.0037967758253216743 #38 Loss: 0.0037256714422255754 #39 Loss: 0.0036598537117242813 #40 Loss: 0.003598827635869384 #41 Loss: 0.0035421468783169985 #42 Loss: 0.0034894247073680162 #43 Loss: 0.003440307453274727 #44 Loss: 0.0033944963943213224 #45 Loss: 0.003351695602759719 #46 Loss: 0.003311669686809182 #47 Loss: 0.003274182789027691 #48 Loss: 0.0032390293199568987 #49 Loss: 0.0032060390803962946 #50 Loss: 0.0031750358175486326 #51 Loss: 0.0031458677258342505 #52 Loss: 0.003118406282737851 #53 Loss: 0.0030925225000828505 #54 Loss: 0.0030680971685796976 #55 Loss: 0.0030450366903096437 #56 Loss: 0.003023233264684677 #57 Loss: 0.0030026088934391737 #58 Loss: 0.002983089303597808 #59 Loss: 0.0029645822942256927 #60 Loss: 0.00294703827239573 #61 Loss: 0.0029303862247616053 #62 Loss: 0.002914572134613991 #63 Loss: 0.0028995368629693985 #64 Loss: 0.0028852447867393494 #65 Loss: 0.002871639095246792 #66 Loss: 0.002858673455193639 #67 Loss: 0.0028463276103138924 #68 Loss: 0.0028345445170998573 #69 Loss: 0.0028233081102371216 #70 Loss: 0.0028125671669840813 #71 Loss: 0.002802313072606921 #72 Loss: 0.0027925113681703806 #73 Loss: 0.002783131552860141 #74 Loss: 0.0027741591911762953 #75 Loss: 0.00276556215249002 #76 Loss: 0.0027573201805353165 #77 Loss: 0.002749415347352624 #78 Loss: 0.002741842297837138 #79 Loss: 0.0027345670387148857 #80 Loss: 0.00272758468054235 #81 Loss: 0.0027208721730858088 #82 Loss: 0.002714422531425953 #83 Loss: 0.002708215033635497 #84 Loss: 0.0027022461872547865 #85 Loss: 0.0026964957360178232 #86 Loss: 0.002690958557650447 #87 Loss: 0.0026856244076043367 #88 Loss: 0.002680474892258644 #89 Loss: 0.002675510011613369 #90 Loss: 0.002670713933184743 #91 Loss: 0.0026660896837711334 #92 Loss: 0.0026616165414452553 #93 Loss: 0.0026572979986667633 #94 Loss: 0.0026531198527663946 #95 Loss: 0.002649075584486127 #96 Loss: 0.002645164029672742 #97 Loss: 0.0026413705199956894 #98 Loss: 0.0026377029716968536 #99 Loss: 0.002634142292663455 #100 Loss: 0.00263069081120193 #101 Loss: 0.0026273438706994057 #102 Loss: 0.0026240937877446413 #103 Loss: 0.0026209382340312004 #104 Loss: 0.002617868361994624 #105 Loss: 0.002614888595417142 #106 Loss: 0.0026119956746697426 #107 Loss: 0.002609172137454152 #108 Loss: 0.0026064326521009207 #109 Loss: 0.002603760687634349 #110 Loss: 0.00260116346180439 #111 Loss: 0.002598624676465988 #112 Loss: 0.0025961531791836023 #113 Loss: 
0.0025937433820217848 #114 Loss: 0.0025913880672305822 #115 Loss: 0.0025890925899147987 #116 Loss: 0.002586849732324481 #117 Loss: 0.002584656234830618 #118 Loss: 0.0025825174525380135 #119 Loss: 0.0025804194156080484 #120 Loss: 0.0025783723685890436 #121 Loss: 0.002576368162408471 #122 Loss: 0.002574402838945389 #123 Loss: 0.002572478959336877 #124 Loss: 0.0025705902371555567 #125 Loss: 0.0025687431916594505 #126 Loss: 0.002566935494542122 #127 Loss: 0.0025651559699326754 #128 Loss: 0.002563410671427846 #129 Loss: 0.0025617002975195646 #130 Loss: 0.0025600148364901543 #131 Loss: 0.0025583638343960047 #132 Loss: 0.002556734485551715 #133 Loss: 0.002555140992626548 #134 Loss: 0.0025535663589835167 #135 Loss: 0.0025520166382193565 #136 Loss: 0.002550497418269515 #137 Loss: 0.0025489996187388897 #138 Loss: 0.002547516720369458 #139 Loss: 0.0025460589677095413 #140 Loss: 0.0025446258950978518 #141 Loss: 0.0025432079564779997 #142 Loss: 0.0025418128352612257 #143 Loss: 0.0025404333136975765 #144 Loss: 0.0025390759110450745 #145 Loss: 0.002537728287279606 #146 Loss: 0.0025364060420542955 #147 Loss: 0.0025350917130708694 #148 Loss: 0.002533797873184085 #149 Loss: 0.002532513812184334 #150 Loss: 0.0025312507059425116 #151 Loss: 0.0025300011038780212 #152 Loss: 0.0025287508033216 #153 Loss: 0.0025275293737649918 #154 Loss: 0.002526313764974475 #155 Loss: 0.00252510909922421 #156 Loss: 0.0025239146780222654 #157 Loss: 0.0025227360893040895 #158 Loss: 0.002521563321352005 #159 Loss: 0.002520401030778885 #160 Loss: 0.002519249450415373 #161 Loss: 0.0025181034579873085 #162 Loss: 0.0025169753935188055 #163 Loss: 0.0025158498901873827 #164 Loss: 0.0025147362612187862 #165 Loss: 0.002513629151508212 #166 Loss: 0.002512530190870166 #167 Loss: 0.0025114361196756363 #168 Loss: 0.0025103483349084854 #169 Loss: 0.0025092700961977243 #170 Loss: 0.0025081969797611237 #171 Loss: 0.0025071338750422 #172 Loss: 0.0025060747284442186 #173 Loss: 0.0025050221011042595 #174 Loss: 0.002503973664715886 #175 Loss: 0.002502931747585535 #176 Loss: 0.002501895884051919 #177 Loss: 0.0025008656084537506 #178 Loss: 0.00249984092079103 #179 Loss: 0.002498818328604102 #180 Loss: 0.002497798763215542 #181 Loss: 0.0024967871140688658 #182 Loss: 0.00249578058719635 #183 Loss: 0.0024947759229689837 #184 Loss: 0.0024937766138464212 #185 Loss: 0.002492778468877077 #186 Loss: 0.0024917826522141695 #187 Loss: 0.0024907945189625025 #188 Loss: 0.002489812206476927 #189 Loss: 0.002488828031346202 #190 Loss: 0.0024878503754734993 #191 Loss: 0.0024868694599717855 #192 Loss: 0.002485897159203887 #193 Loss: 0.002484926488250494 #194 Loss: 0.0024839574471116066 #195 Loss: 0.0024829902686178684 #196 Loss: 0.002482031239196658 #197 Loss: 0.0024810675531625748 #198 Loss: 0.002480114810168743 #199 Loss: 0.00247915368527174 #200 Loss: 0.0024782009422779083 #201 Loss: 0.002477245405316353 #202 Loss: 0.0024762984830886126 #203 Loss: 0.002475348999723792 #204 Loss: 0.002474398585036397 #205 Loss: 0.0024734551552683115 #206 Loss: 0.002472516382113099 #207 Loss: 0.002471569227054715 #208 Loss: 0.002470628125593066 #209 Loss: 0.0024696916807442904 #210 Loss: 0.002468749647960067 #211 Loss: 0.0024678176268935204 #212 Loss: 0.0024668758269399405 #213 Loss: 0.002465949160978198 #214 Loss: 0.0024650150444358587 #215 Loss: 0.00246407906524837 #216 Loss: 0.002463151467964053 #217 Loss: 0.002462216652929783 #218 Loss: 0.0024612878914922476 #219 Loss: 0.002460360061377287 #220 Loss: 0.0024594322312623262 #221 Loss: 0.0024585050996392965 #222 Loss: 
0.002457576571032405 #223 Loss: 0.0024566520005464554 #224 Loss: 0.002455727430060506 #225 Loss: 0.002454800298437476 #226 Loss: 0.002453884808346629 #227 Loss: 0.0024529551155865192 #228 Loss: 0.002452034503221512 #229 Loss: 0.002451109467074275 #230 Loss: 0.0024501883890479803 #231 Loss: 0.002449269639328122 #232 Loss: 0.0024483499582856894 #233 Loss: 0.002447424689307809 #234 Loss: 0.0024465022142976522 #235 Loss: 0.0024455797392874956 #236 Loss: 0.0024446637835353613 #237 Loss: 0.002443745033815503 #238 Loss: 0.0024428225588053465 #239 Loss: 0.0024419049732387066 #240 Loss: 0.002440983895212412 #241 Loss: 0.0024400672409683466 #242 Loss: 0.002439146162942052 #243 Loss: 0.0024382262490689754 #244 Loss: 0.002437308896332979 #245 Loss: 0.0024363857228308916 #246 Loss: 0.002435472561046481 #247 Loss: 0.0024345542769879103 #248 Loss: 0.0024336313363164663 #249 Loss: 0.00243271142244339 #250 Loss: 0.00243179383687675 #251 Loss: 0.0024308778811246157 #252 Loss: 0.0024299558717757463 #253 Loss: 0.0024290340952575207 #254 Loss: 0.002428111620247364 #255 Loss: 0.002427193336188793 #256 Loss: 0.002426273887977004 #257 Loss: 0.002425355603918433 #258 Loss: 0.002424436155706644 #259 Loss: 0.002423514612019062 #260 Loss: 0.002422596327960491 #261 Loss: 0.0024216733872890472 #262 Loss: 0.0024207504466176033 #263 Loss: 0.002419829135760665 #264 Loss: 0.0024189057294279337 #265 Loss: 0.0024179841857403517 #266 Loss: 0.002417063107714057 #267 Loss: 0.0024161438923329115 #268 Loss: 0.0024152155965566635 #269 Loss: 0.0024142952170222998 #270 Loss: 0.0024133676197379827 #271 Loss: 0.002412450732663274 #272 Loss: 0.002411528956145048 #273 Loss: 0.0024105983320623636 #274 Loss: 0.0024096802808344364 #275 Loss: 0.0024087547790259123 #276 Loss: 0.0024078262504190207 #277 Loss: 0.0024068995844572783 #278 Loss: 0.0024059752468019724 #279 Loss: 0.002405051840469241 #280 Loss: 0.002404116792604327 #281 Loss: 0.0024031943175941706 #282 Loss: 0.0024022667203098536 #283 Loss: 0.002401341451331973 #284 Loss: 0.002400410594418645 #285 Loss: 0.0023994811344891787 #286 Loss: 0.0023985551670193672 #287 Loss: 0.0023976238444447517 #288 Loss: 0.0023966955486685038 #289 Loss: 0.0023957621306180954 #290 Loss: 0.002394832205027342 #291 Loss: 0.0023939006496220827 #292 Loss: 0.002392966765910387 #293 Loss: 0.00239203916862607 #294 Loss: 0.002391106216236949 #295 Loss: 0.0023901707027107477 #296 Loss: 0.002389240777119994 #297 Loss: 0.0023883050307631493 #298 Loss: 0.0023873704485595226 #299 Loss: 0.0023864342365413904 #300 Loss: 0.0023854991886764765 #301 Loss: 0.0023845701944082975 #302 Loss: 0.0023836297914385796 #303 Loss: 0.0023826900869607925 #304 Loss: 0.0023817545734345913 #305 Loss: 0.002380818361416459 #306 Loss: 0.0023798795882612467 #307 Loss: 0.0023789377883076668 #308 Loss: 0.0023780011106282473 #309 Loss: 0.0023770590778440237 #310 Loss: 0.0023761214688420296 #311 Loss: 0.0023751859553158283 #312 Loss: 0.0023742406629025936 #313 Loss: 0.002373295836150646 #314 Loss: 0.0023723554331809282 #315 Loss: 0.002371413866057992 #316 Loss: 0.0023704750929027796 #317 Loss: 0.002369531663134694 #318 Loss: 0.0023685868363827467 #319 Loss: 0.002367644337937236 #320 Loss: 0.002366698579862714 #321 Loss: 0.0023657495621591806 #322 Loss: 0.0023648033384233713 #323 Loss: 0.002363859675824642 #324 Loss: 0.0023629090283066034 #325 Loss: 0.0023619639687240124 #326 Loss: 0.0023610175121575594 #327 Loss: 0.002360069891437888 #328 Loss: 0.002359122270718217 #329 Loss: 0.0023581702262163162 #330 Loss: 0.0023572223726660013 #331 Loss: 
0.002356275450438261 #332 Loss: 0.0023553166538476944 #333 Loss: 0.0023543667048215866 #334 Loss: 0.0023534176871180534 #335 Loss: 0.002352464245632291 #336 Loss: 0.0023515131324529648 #337 Loss: 0.0023505568969994783 #338 Loss: 0.0023496015928685665 #339 Loss: 0.002348652807995677 #340 Loss: 0.002347696339711547 #341 Loss: 0.0023467380087822676 #342 Loss: 0.0023457861971110106 #343 Loss: 0.0023448301944881678 #344 Loss: 0.0023438704665750265 #345 Loss: 0.002342912135645747 #346 Loss: 0.002341957064345479 #347 Loss: 0.0023409996647387743 #348 Loss: 0.0023400387726724148 #349 Loss: 0.002339078113436699 #350 Loss: 0.002338117454200983 #351 Loss: 0.0023371621500700712 #352 Loss: 0.0023361986968666315 #353 Loss: 0.00233523640781641 #354 Loss: 0.0023342801723629236 #355 Loss: 0.002333313226699829 #356 Loss: 0.002332353265956044 #357 Loss: 0.002331388648599386 #358 Loss: 0.0023304217029362917 #359 Loss: 0.0023294605780392885 #360 Loss: 0.002328496426343918 #361 Loss: 0.002327530412003398 #362 Loss: 0.0023265639320015907 #363 Loss: 0.0023255993146449327 #364 Loss: 0.0023246288765221834 #365 Loss: 0.0023236607667058706 #366 Loss: 0.002322700573131442 #367 Loss: 0.0023217289708554745 #368 Loss: 0.0023207550402730703 #369 Loss: 0.002319787396118045 #370 Loss: 0.002318824175745249 #371 Loss: 0.0023178488481789827 #372 Loss: 0.002316881902515888 #373 Loss: 0.0023159075062721968 #374 Loss: 0.002314941259101033 #375 Loss: 0.0023139675613492727 #376 Loss: 0.0023129950277507305 #377 Loss: 0.0023120215628296137 #378 Loss: 0.002311046002432704 #379 Loss: 0.002310073934495449 #380 Loss: 0.002309101400896907 #381 Loss: 0.0023081284016370773 #382 Loss: 0.00230714725330472 #383 Loss: 0.00230617169290781 #384 Loss: 0.0023051972966641188 #385 Loss: 0.002304219640791416 #386 Loss: 0.0023032415192574263 #387 Loss: 0.002302265027537942 #388 Loss: 0.0023012871388345957 #389 Loss: 0.002300310181453824 #390 Loss: 0.002299328101798892 #391 Loss: 0.0022983483504503965 #392 Loss: 0.0022973709274083376 #393 Loss: 0.002296391176059842 #394 Loss: 0.002295407932251692 #395 Loss: 0.00229442841373384 #396 Loss: 0.002293441677466035 #397 Loss: 0.0022924619261175394 #398 Loss: 0.0022914784494787455 #399 Loss: 0.0022904963698238134 #400 Loss: 0.0022895135916769505 #401 Loss: 0.0022885303478688 #402 Loss: 0.0022875459399074316 #403 Loss: 0.0022865592036396265 #404 Loss: 0.0022855724673718214 #405 Loss: 0.0022845915518701077 #406 Loss: 0.002283601788803935 #407 Loss: 0.002282612957060337 #408 Loss: 0.002281626919284463 #409 Loss: 0.0022806443739682436 #410 Loss: 0.0022796487901359797 #411 Loss: 0.0022786634508520365 #412 Loss: 0.0022776739206165075 #413 Loss: 0.0022766822949051857 #414 Loss: 0.0022756929975003004 #415 Loss: 0.0022747062612324953 #416 Loss: 0.00227371440269053 #417 Loss: 0.0022727230098098516 #418 Loss: 0.002271731849759817 #419 Loss: 0.0022707392927259207 #420 Loss: 0.002269746968522668 #421 Loss: 0.002268751384690404 #422 Loss: 0.002267759060487151 #423 Loss: 0.0022667646408081055 #424 Loss: 0.0022657769732177258 #425 Loss: 0.002264777896925807 #426 Loss: 0.002263784408569336 #427 Loss: 0.0022627897560596466 #428 Loss: 0.0022617937065660954 #429 Loss: 0.002260798355564475 #430 Loss: 0.0022597969509661198 #431 Loss: 0.002258802531287074 #432 Loss: 0.0022578088100999594 #433 Loss: 0.0022568099666386843 #434 Loss: 0.002255811123177409 #435 Loss: 0.0022548120468854904 #436 Loss: 0.0022538129705935717 #437 Loss: 0.0022528113331645727 #438 Loss: 0.002251812256872654 #439 Loss: 0.00225081411190331 #440 Loss: 
0.0022498099133372307 #441 Loss: 0.002248812699690461 #442 Loss: 0.002247813157737255 #443 Loss: 0.0022468070965260267 #444 Loss: 0.002245804527774453 #445 Loss: 0.0022448061499744654 #446 Loss: 0.002243800787255168 #447 Loss: 0.0022427986841648817 #448 Loss: 0.0022417923901230097 #449 Loss: 0.0022407902870327234 #450 Loss: 0.0022397860884666443 #451 Loss: 0.002238777931779623 #452 Loss: 0.002237774431705475 #453 Loss: 0.00223676860332489 #454 Loss: 0.0022357627749443054 #455 Loss: 0.002234755316749215 #456 Loss: 0.0022337529808282852 #457 Loss: 0.0022327450569719076 #458 Loss: 0.0022317382972687483 #459 Loss: 0.002230728277936578 #460 Loss: 0.0022297168616205454 #461 Loss: 0.0022287091705948114 #462 Loss: 0.002227703807875514 #463 Loss: 0.002226694021373987 #464 Loss: 0.002225684467703104 #465 Loss: 0.0022246765438467264 #466 Loss: 0.0022236653603613377 #467 Loss: 0.0022226530127227306 #468 Loss: 0.002221642527729273 #469 Loss: 0.0022206297144293785 #470 Loss: 0.0022196185309439898 #471 Loss: 0.0022186103742569685 #472 Loss: 0.0022175933700054884 #473 Loss: 0.0022165849804878235 #474 Loss: 0.0022155700717121363 #475 Loss: 0.0022145553957670927 #476 Loss: 0.0022135439794510603 #477 Loss: 0.0022125281393527985 #478 Loss: 0.002211514627560973 #479 Loss: 0.002210496924817562 #480 Loss: 0.0022094829473644495 #481 Loss: 0.0022084659431129694 #482 Loss: 0.0022074568551033735 #483 Loss: 0.002206437522545457 #484 Loss: 0.0022054200526326895 #485 Loss: 0.0022044044453650713 #486 Loss: 0.0022033853456377983 #487 Loss: 0.0022023695055395365 #488 Loss: 0.002201352035626769 #489 Loss: 0.0022003341000527143 #490 Loss: 0.002199317794293165 #491 Loss: 0.0021982965990900993 #492 Loss: 0.0021972774993628263 #493 Loss: 0.00219626072794199 #494 Loss: 0.0021952392999082804 #495 Loss: 0.002194217639043927 #496 Loss: 0.002193200634792447 #497 Loss: 0.002192180836573243 #498 Loss: 0.0021911589428782463 #499 Loss: 0.0021901384461671114 #500 Loss: 0.002189117018133402 #501 Loss: 0.0021880920976400375 #502 Loss: 0.0021870729979127645 #503 Loss: 0.0021860499400645494 #504 Loss: 0.0021850315388292074 #505 Loss: 0.002184005454182625 #506 Loss: 0.0021829840261489153 #507 Loss: 0.002181959105655551 #508 Loss: 0.0021809397730976343 #509 Loss: 0.002179911592975259 #510 Loss: 0.002178889000788331 #511 Loss: 0.0021778629161417484 #512 Loss: 0.002176836598664522 #513 Loss: 0.002175812376663089 #514 Loss: 0.0021747888531535864 #515 Loss: 0.0021737609058618546 #516 Loss: 0.002172738080844283 #517 Loss: 0.002171711064875126 #518 Loss: 0.0021706840489059687 #519 Loss: 0.0021696598269045353 #520 Loss: 0.0021686323452740908 #521 Loss: 0.0021676046308130026 #522 Loss: 0.0021665773820132017 #523 Loss: 0.0021655478049069643 #524 Loss: 0.0021645205561071634 #525 Loss: 0.002163497731089592 #526 Loss: 0.002162465127184987 #527 Loss: 0.0021614336874336004 #528 Loss: 0.0021604085341095924 #529 Loss: 0.0021593787241727114 #530 Loss: 0.0021583528723567724 #531 Loss: 0.0021573195699602365 #532 Loss: 0.0021562918554991484 #533 Loss: 0.0021552571561187506 #534 Loss: 0.0021542287431657314 #535 Loss: 0.0021532000973820686 #536 Loss: 0.0021521716844290495 #537 Loss: 0.0021511383820325136 #538 Loss: 0.0021501071751117706 #539 Loss: 0.0021490773651748896 #540 Loss: 0.0021480440627783537 #541 Loss: 0.002147009363397956 #542 Loss: 0.0021459797862917185 #543 Loss: 0.002144948346540332 #544 Loss: 0.002143915044143796 #545 Loss: 0.0021428829059004784 #546 Loss: 0.002141848672181368 #547 Loss: 0.0021408156026154757 #548 Loss: 0.002139780670404434 #549 
Loss: 0.0021387485321611166 #550 Loss: 0.002137715695425868 #551 Loss: 0.0021366847213357687 #552 Loss: 0.0021356476936489344 #553 Loss: 0.0021346136927604675 #554 Loss: 0.0021335785277187824 #555 Loss: 0.002132538938894868 #556 Loss: 0.002131509128957987 #557 Loss: 0.002130476525053382 #558 Loss: 0.0021294394973665476 #559 Loss: 0.002128403866663575 #560 Loss: 0.002127366838976741 #561 Loss: 0.0021263323724269867 #562 Loss: 0.002125295577570796 #563 Loss: 0.002124261111021042 #564 Loss: 0.0021232208237051964 #565 Loss: 0.002122187288478017 #566 Loss: 0.0021211470011621714 #567 Loss: 0.002120112767443061 #568 Loss: 0.002119072712957859 #569 Loss: 0.0021180338226258755 #570 Loss: 0.00211700308136642 #571 Loss: 0.0021159613970667124 #572 Loss: 0.0021149280946701765 #573 Loss: 0.0021138915326446295 #574 Loss: 0.0021128482185304165 #575 Loss: 0.0021118095610290766 #576 Loss: 0.0021107716020196676 #577 Loss: 0.002109734108671546 #578 Loss: 0.0021087005734443665 #579 Loss: 0.002107657492160797 #580 Loss: 0.00210661836899817 #581 Loss: 0.002105577616021037 #582 Loss: 0.0021045382600277662 #583 Loss: 0.002103500533849001 #584 Loss: 0.0021024595480412245 #585 Loss: 0.0021014243829995394 #586 Loss: 0.002100378042086959 #587 Loss: 0.002099341945722699 #588 Loss: 0.00209829886443913 #589 Loss: 0.0020972639322280884 #590 Loss: 0.0020962206181138754 #591 Loss: 0.002095181494951248 #592 Loss: 0.0020941428374499083 #593 Loss: 0.002093098359182477 #594 Loss: 0.002092057839035988 #595 Loss: 0.0020910180173814297 #596 Loss: 0.002089978661388159 #597 Loss: 0.0020889334846287966 #598 Loss: 0.0020878936629742384 #599 Loss: 0.0020868529099971056 #600 Loss: 0.002085815416648984 #601 Loss: 0.0020847702398896217 #602 Loss: 0.0020837283227592707 #603 Loss: 0.0020826871041208506 #604 Loss: 0.0020816465839743614 #605 Loss: 0.002080598147585988 #606 Loss: 0.002079556928947568 #607 Loss: 0.0020785192027688026 #608 Loss: 0.0020774772856384516 #609 Loss: 0.002076430944725871 #610 Loss: 0.00207538646645844 #611 Loss: 0.00207435037009418 #612 Loss: 0.002073307754471898 #613 Loss: 0.002072261879220605 #614 Loss: 0.0020712194964289665 #615 Loss: 0.0020701782777905464 #616 Loss: 0.0020691361278295517 #617 Loss: 0.0020680923480540514 #618 Loss: 0.0020670518279075623 #619 Loss: 0.0020660050213336945 #620 Loss: 0.0020649584475904703 #621 Loss: 0.0020639190915971994 #622 Loss: 0.002062877407297492 #623 Loss: 0.0020618324633687735 #624 Loss: 0.0020607870537787676 #625 Loss: 0.00205974536947906 #626 Loss: 0.0020587043836712837 #627 Loss: 0.0020576564129441977 #628 Loss: 0.0020566147286444902 #629 Loss: 0.002055570250377059 #630 Loss: 0.0020545274019241333 #631 Loss: 0.0020534859504550695 #632 Loss: 0.002052436349913478 #633 Loss: 0.0020513960625976324 #634 Loss: 0.0020503487903624773 #635 Loss: 0.0020493092015385628 #636 Loss: 0.002048263093456626 #637 Loss: 0.002047223038971424 #638 Loss: 0.002046172507107258 #639 Loss: 0.002045132452622056 #640 Loss: 0.002044085180386901 #641 Loss: 0.002043043961748481 #642 Loss: 0.0020420013461261988 #643 Loss: 0.0020409554708749056 #644 Loss: 0.002039908664301038 #645 Loss: 0.002038867911323905 #646 Loss: 0.0020378208719193935 #647 Loss: 0.0020367794204503298 #648 Loss: 0.0020357321482151747 #649 Loss: 0.002034691860899329 #650 Loss: 0.002033643191680312 #651 Loss: 0.002032601274549961 #652 Loss: 0.002031555864959955 #653 Loss: 0.0020305109210312366 #654 Loss: 0.002029466675594449 #655 Loss: 0.002028421498835087 #656 Loss: 0.002027378184720874 #657 Loss: 0.0020263351034373045 #658 Loss: 
0.0020252885296940804 #659 Loss: 0.0020242466125637293 #660 Loss: 0.002023200271651149 #661 Loss: 0.002022160217165947 #662 Loss: 0.0020211131777614355 #663 Loss: 0.0020200731232762337 #664 Loss: 0.0020190232899039984 #665 Loss: 0.0020179767161607742 #666 Loss: 0.002016937592998147 #667 Loss: 0.002015892183408141 #668 Loss: 0.0020148518960922956 #669 Loss: 0.0020138081163167953 #670 Loss: 0.0020127587486058474 #671 Loss: 0.0020117172971367836 #672 Loss: 0.0020106742158532143 #673 Loss: 0.002009629737585783 #674 Loss: 0.002008582465350628 #675 Loss: 0.0020075414795428514 #676 Loss: 0.002006495138630271 #677 Loss: 0.0020054553169757128 #678 Loss: 0.002004409907385707 #679 Loss: 0.002003363100811839 #680 Loss: 0.002002324676141143 #681 Loss: 0.002001277869567275 #682 Loss: 0.002000238513574004 #683 Loss: 0.001999191241338849 #684 Loss: 0.0019981495570391417 #685 Loss: 0.0019971048459410667 #686 Loss: 0.0019960617646574974 #687 Loss: 0.0019950189162045717 #688 Loss: 0.0019939783960580826 #689 Loss: 0.001992932753637433 #690 Loss: 0.001991888275370002 #691 Loss: 0.0019908458925783634 #692 Loss: 0.001989804906770587 #693 Loss: 0.0019887599628418684 #694 Loss: 0.0019877159502357244 #695 Loss: 0.001986677525565028 #696 Loss: 0.0019856367725878954 #697 Loss: 0.001984592527151108 #698 Loss: 0.001983546419069171 #699 Loss: 0.001982505898922682 #700 Loss: 0.0019814646802842617 #701 Loss: 0.0019804220646619797 #702 Loss: 0.0019793810788542032 #703 Loss: 0.0019783375319093466 #704 Loss: 0.0019772977102547884 #705 Loss: 0.0019762550946325064 #706 Loss: 0.0019752129446715117 #707 Loss: 0.001974171493202448 #708 Loss: 0.001973131438717246 #709 Loss: 0.001972092781215906 #710 Loss: 0.0019710464403033257 #711 Loss: 0.0019700077828019857 #712 Loss: 0.001968963770195842 #713 Loss: 0.0019679246470332146 #714 Loss: 0.0019668852910399437 #715 Loss: 0.001965844538062811 #716 Loss: 0.001964807277545333 #717 Loss: 0.0019637665245682 #718 Loss: 0.0019627264700829983 #719 Loss: 0.00196168408729136 #720 Loss: 0.00196064286865294 #721 Loss: 0.0019596030469983816 #722 Loss: 0.001958560897037387 #723 Loss: 0.001957525731995702 #724 Loss: 0.001956489635631442 #725 Loss: 0.001955445623025298 #726 Loss: 0.001954407896846533 #727 Loss: 0.0019533671438694 #728 Loss: 0.0019523290684446692 #729 Loss: 0.0019512904109433293 #730 Loss: 0.0019502503564581275 #731 Loss: 0.0019492128631100059 #732 Loss: 0.0019481779308989644 #733 Loss: 0.0019471339182928205 #734 Loss: 0.0019461024785414338 #735 Loss: 0.0019450596300885081 #736 Loss: 0.0019440216710790992 #737 Loss: 0.001942987204529345 #738 Loss: 0.0019419504096731544 #739 Loss: 0.0019409122178331017 #740 Loss: 0.0019398737931624055 #741 Loss: 0.0019388411892578006 #742 Loss: 0.001937802298925817 #743 Loss: 0.001936764339916408 #744 Loss: 0.001935729756951332 #745 Loss: 0.0019346913322806358 #746 Loss: 0.0019336584955453873 #747 Loss: 0.0019326211186125875 #748 Loss: 0.001931585487909615 #749 Loss: 0.0019305492751300335 #750 Loss: 0.0019295121310278773 #751 Loss: 0.0019284767331555486 #752 Loss: 0.0019274475052952766 #753 Loss: 0.0019264090806245804 #754 Loss: 0.0019253772916272283 #755 Loss: 0.0019243452697992325 #756 Loss: 0.0019233074272051454 #757 Loss: 0.0019222754053771496 #758 Loss: 0.0019212419865652919 #759 Loss: 0.0019202110124751925 #760 Loss: 0.0019191773608326912 #761 Loss: 0.0019181432435289025 #762 Loss: 0.0019171085441485047 #763 Loss: 0.0019160775700584054 #764 Loss: 0.0019150450825691223 #765 Loss: 0.0019140088697895408 #766 Loss: 0.0019129784777760506 #767 Loss: 
0.0019119485514238477 #768 Loss: 0.0019109140848740935 #769 Loss: 0.0019098850898444653 #770 Loss: 0.001908852718770504 #771 Loss: 0.001907822792418301 #772 Loss: 0.0019067925168201327 #773 Loss: 0.0019057630561292171 #774 Loss: 0.0019047335954383016 #775 Loss: 0.0019037051824852824 #776 Loss: 0.0019026693189516664 #777 Loss: 0.0019016433507204056 #778 Loss: 0.0019006148213520646 #779 Loss: 0.0018995892023667693 #780 Loss: 0.0018985569477081299 #781 Loss: 0.0018975288840010762 #782 Loss: 0.0018965002382174134 #783 Loss: 0.0018954715924337506 #784 Loss: 0.0018944436451420188 #785 Loss: 0.0018934140680357814 #786 Loss: 0.0018923920579254627 #787 Loss: 0.0018913644598796964 #788 Loss: 0.001890333485789597 #789 Loss: 0.0018893079832196236 #790 Loss: 0.0018882853910326958 #791 Loss: 0.001887254766188562 #792 Loss: 0.0018862345023080707 #793 Loss: 0.0018852058565244079 #794 Loss: 0.00188418326433748 #795 Loss: 0.0018831556662917137 #796 Loss: 0.0018821310950443149 #797 Loss: 0.0018811067566275597 #798 Loss: 0.001880083349533379 #799 Loss: 0.001879060291685164 #800 Loss: 0.0018780353711917996 #801 Loss: 0.0018770135939121246 #802 Loss: 0.0018759918166324496 #803 Loss: 0.0018749730661511421 #804 Loss: 0.0018739477964118123 #805 Loss: 0.0018729200819507241 #806 Loss: 0.0018719009822234511 #807 Loss: 0.001870879321359098 #808 Loss: 0.001869861502200365 #809 Loss: 0.0018688408890739083 #810 Loss: 0.001867820625193417 #811 Loss: 0.0018667984986677766 #812 Loss: 0.0018657720647752285 #813 Loss: 0.001864760648459196 #814 Loss: 0.0018637363100424409 #815 Loss: 0.0018627209356054664 #816 Loss: 0.0018617023015394807 #817 Loss: 0.0018606797093525529 #818 Loss: 0.0018596658483147621 #819 Loss: 0.0018586452351883054 #820 Loss: 0.0018576303264126182 #821 Loss: 0.001856614020653069 #822 Loss: 0.001855594920925796 #823 Loss: 0.0018545795464888215 #824 Loss: 0.001853560097515583 #825 Loss: 0.0018525446066632867 #826 Loss: 0.0018515288829803467 #827 Loss: 0.00185050955042243 #828 Loss: 0.0018494967371225357 #829 Loss: 0.0018484825268387794 #830 Loss: 0.001847467734478414 #831 Loss: 0.0018464555032551289 #832 Loss: 0.0018454398959875107 #833 Loss: 0.0018444285960868 #834 Loss: 0.0018434150842949748 #835 Loss: 0.0018424022709950805 #836 Loss: 0.001841390854679048 #837 Loss: 0.0018403776921331882 #838 Loss: 0.0018393672071397305 #839 Loss: 0.00183835718780756 #840 Loss: 0.0018373435596004128 #841 Loss: 0.001836334471590817 #842 Loss: 0.0018353263149037957 #843 Loss: 0.0018343138508498669 #844 Loss: 0.0018333062762394547 #845 Loss: 0.001832296489737928 #846 Loss: 0.0018312829779461026 #847 Loss: 0.0018302792450413108 #848 Loss: 0.0018292715540155768 #849 Loss: 0.0018282626988366246 #850 Loss: 0.0018272522138431668 #851 Loss: 0.001826247200369835 #852 Loss: 0.0018252409063279629 #853 Loss: 0.001824233098886907 #854 Loss: 0.0018232259899377823 #855 Loss: 0.001822225865907967 #856 Loss: 0.0018212157301604748 #857 Loss: 0.0018202122300863266 #858 Loss: 0.0018192125717177987 #859 Loss: 0.0018182039493694901 #860 Loss: 0.0018171994015574455 #861 Loss: 0.0018161969492211938 #862 Loss: 0.00181519181933254 #863 Loss: 0.0018141911132261157 #864 Loss: 0.001813187263906002 #865 Loss: 0.0018121921457350254 #866 Loss: 0.0018111892277374864 #867 Loss: 0.0018101868918165565 #868 Loss: 0.0018091824604198337 #869 Loss: 0.001808184664696455 #870 Loss: 0.0018071848899126053 #871 Loss: 0.0018061831360682845 #872 Loss: 0.0018051863880828023 #873 Loss: 0.0018041870789602399 #874 Loss: 0.0018031877698376775 #875 Loss: 0.0018021933501586318 
#876 Loss: 0.0018011946231126785 #877 Loss: 0.0018001968273892999 #878 Loss: 0.0017991961212828755 #879 Loss: 0.001798199606128037 #880 Loss: 0.0017972056521102786 #881 Loss: 0.0017962086712941527 #882 Loss: 0.0017952205380424857 #883 Loss: 0.0017942209960892797 #884 Loss: 0.001793228555470705 #885 Loss: 0.0017922349506989121 #886 Loss: 0.001791241578757763 #887 Loss: 0.001790247275494039 #888 Loss: 0.0017892572795972228 #889 Loss: 0.0017882628599181771 #890 Loss: 0.0017872735625132918 #891 Loss: 0.0017862803069874644 #892 Loss: 0.001785286352969706 #893 Loss: 0.0017842984525486827 #894 Loss: 0.0017833089223131537 #895 Loss: 0.0017823184607550502 #896 Loss: 0.0017813298618420959 #897 Loss: 0.0017803410300984979 #898 Loss: 0.0017793524311855435 #899 Loss: 0.0017783649964258075 #900 Loss: 0.001777378492988646 #901 Loss: 0.0017763897776603699 #902 Loss: 0.0017754010623320937 #903 Loss: 0.001774418051354587 #904 Loss: 0.0017734314315021038 #905 Loss: 0.0017724483041092753 #906 Loss: 0.0017714608693495393 #907 Loss: 0.0017704787896946073 #908 Loss: 0.0017694927519187331 #909 Loss: 0.0017685088096186519 #910 Loss: 0.0017675244016572833 #911 Loss: 0.001766547211445868 #912 Loss: 0.001765563734807074 #913 Loss: 0.001764580956660211 #914 Loss: 0.0017636003904044628 #915 Loss: 0.0017626197077333927 #916 Loss: 0.0017616351833567023 #917 Loss: 0.0017606564797461033 #918 Loss: 0.0017596777761355042 #919 Loss: 0.0017587020993232727 #920 Loss: 0.001757721765898168 #921 Loss: 0.0017567459726706147 #922 Loss: 0.0017557647079229355 #923 Loss: 0.0017547908937558532 #924 Loss: 0.0017538117244839668 #925 Loss: 0.0017528367461636662 #926 Loss: 0.0017518624663352966 #927 Loss: 0.0017508859746158123 #928 Loss: 0.0017499076202511787 #929 Loss: 0.0017489390447735786 #930 Loss: 0.0017479656962677836 #931 Loss: 0.0017469911836087704 #932 Loss: 0.0017460188828408718 #933 Loss: 0.0017450453015044332 #934 Loss: 0.00174407206941396 #935 Loss: 0.0017430986044928432 #936 Loss: 0.0017421283992007375 #937 Loss: 0.001741158775985241 #938 Loss: 0.0017401917139068246 #939 Loss: 0.0017392206937074661 #940 Loss: 0.0017382544465363026 #941 Loss: 0.0017372820293530822 #942 Loss: 0.001736316829919815 #943 Loss: 0.0017353454604744911 #944 Loss: 0.0017343764193356037 #945 Loss: 0.0017334137810394168 #946 Loss: 0.0017324457876384258 #947 Loss: 0.0017314818687736988 #948 Loss: 0.001730515738017857 #949 Loss: 0.0017295492580160499 #950 Loss: 0.0017285882495343685 #951 Loss: 0.0017276207217946649 #952 Loss: 0.0017266602953895926 #953 Loss: 0.0017256977735087276 #954 Loss: 0.0017247359501197934 #955 Loss: 0.0017237764550372958 #956 Loss: 0.0017228134674951434 #957 Loss: 0.0017218533903360367 #958 Loss: 0.0017208936624228954 #959 Loss: 0.001719936146400869 #960 Loss: 0.001718974090181291 #961 Loss: 0.0017180143622681499 #962 Loss: 0.001717058359645307 #963 Loss: 0.0017161048017442226 #964 Loss: 0.001715144026093185 #965 Loss: 0.0017141870921477675 #966 Loss: 0.0017132310895249248 #967 Loss: 0.0017122785793617368 #968 Loss: 0.0017113216454163194 #969 Loss: 0.001710368786007166 #970 Loss: 0.0017094146460294724 #971 Loss: 0.001708458294160664 #972 Loss: 0.0017075081123039126 #973 Loss: 0.0017065554857254028 #974 Loss: 0.0017056027427315712 #975 Loss: 0.0017046512803062797 #976 Loss: 0.0017037037760019302 #977 Loss: 0.0017027502181008458 #978 Loss: 0.0017018018988892436 #979 Loss: 0.001700854511000216 #980 Loss: 0.0016999054932966828 #981 Loss: 0.001698957639746368 #982 Loss: 0.0016980115324258804 #983 Loss: 0.0016970612341538072 #984 Loss: 
0.0016961172223091125 #985 Loss: 0.0016951701836660504 #986 Loss: 0.001694221398793161 #987 Loss: 0.0016932813450694084 #988 Loss: 0.0016923333751037717 #989 Loss: 0.0016913922736421227 #990 Loss: 0.0016904502408578992 #991 Loss: 0.0016895070439204574 #992 Loss: 0.0016885654767975211 #993 Loss: 0.001687621814198792 #994 Loss: 0.0016866797814145684 #995 Loss: 0.001685741706751287 #996 Loss: 0.0016847997903823853 #997 Loss: 0.0016838625306263566 #998 Loss: 0.0016829235246405005 #999 Loss: 0.0016819849843159318 Predicted data based on trained weights: Input (scaled): tensor([0.5000, 1.0000]) Output: tensor([0.9505])
MIT
nn.ipynb
LaudateCorpus1/pytorch_notebooks
Policy Gradients on HIV Simulator An example of using WhyNot for reinforcement learning. WhyNot presents a unified interface with the [OpenAI gym](https://github.com/openai/gym), which makes it easy to run sequential decision making experiments on simulators in WhyNot. In this notebook we compare four different policies on the WhyNot HIV simulator: a neural network policy trained by policy gradient, a random policy, the no treatment policy, and the max treatment policy.
%load_ext autoreload %autoreload 2 import whynot as wn import whynot.gym as gym import numpy as np import matplotlib.pyplot as plt import torch from scripts import utils %matplotlib inline
/Users/miller_john/anaconda3/envs/whynot/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
HIV Simulator The HIV simulator is a differential equation simulator based on Adams, Brian Michael, et al. Dynamic multidrug therapies for HIV: Optimal and STI control approaches. North Carolina State University. Center for Research in Scientific Computation, 2004. This HIV model has a set of 6 state variables and 20 simulation parameters to parameterize the dynamics. The state variables are:* uninfected_T1: uninfected CD4+ T-lymphocytes (cells/ml)* infected_T1: infected CD4+ T-lymphocytes (cells/ml)* uninfected_T2: uninfected macrophages (cells/ml)* infected_T2: infected macrophages (cells/ml)* free_virus: free virus (copies/ml)* immune_response: immune response CTL E (cells/ml) The simulator models two types of drugs: RT (reverse-transcriptase) inhibitors and protease inhibitors. RT inhibitors are more effective on CD4+ T-lymphocytes (T1) cells, while protease inhibitors are more effective on macrophages (T2) cells. There are 4 possible actions:* Action 0: no drug, costs 0* Action 1: protease inhibitor only, costs 1800* Action 2: RT inhibitor only, costs 9800* Action 3: both RT inhibitor and protease inhibitor, costs 11600 The reward at each step is defined based on the current state and the action. We follow the optimization objective introduced in the original paper by Adams et al. Intuitively, virus is bad and immune response is good. $$\text{reward} = -0.1 * \text{free}\_\text{virus} + 1000 * \text{immune}\_\text{response} - \text{action}\_\text{cost}$$
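To make the per-step reward concrete, here is a minimal sketch of the formula above. This is an illustration only, not the environment's actual implementation; the function name and the action-cost table are assumptions based on the costs listed above.

```python
# Illustrative sketch of the per-step reward described above (not the exact
# code inside the WhyNot environment). Action costs follow the list above.
ACTION_COSTS = [0.0, 1800.0, 9800.0, 11600.0]

def hiv_reward(free_virus, immune_response, action):
    """reward = -0.1 * free_virus + 1000 * immune_response - action_cost"""
    return -0.1 * free_virus + 1000.0 * immune_response - ACTION_COSTS[action]

# Example: high viral load, modest immune response, max treatment (action 3).
print(hiv_reward(free_virus=1e5, immune_response=10.0, action=3))  # -10000 + 10000 - 11600 = -11600.0
```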
# Make the HIV environment and set random seed. env = gym.make('HIV-v0') np.random.seed(1) env.seed(1) torch.manual_seed(1)
_____no_output_____
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
Compared Policies Base Policy Class We define a base `Policy` class. Every policy has a `sample_action` function that takes an observation and returns an action. NNPolicy A 1-layer feed-forward neural network with state dimension as input dimension, one hidden layer of 8 neurons (the state dim is 6), and action dimension as output dimension. We use batch normalization and ReLU activation; a rough sketch of this network is given below. No Treatment Policy Never apply any treatment, i.e. action = 0 (corresponds to $\epsilon_1 = 0$ and $\epsilon_2 = 0$ in the simulation) for all observations. Max Treatment Policy Always apply max treatment, i.e. action = 3 (corresponds to $\epsilon_1 = 0.7$ and $\epsilon_2 = 0.3$ in the simulation) for all observations. Random Policy Takes a random action regardless of the observation.
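The three simple baselines are implemented in the next code cell. The learned policy's network itself is constructed inside `scripts/utils.py` and is not shown in this notebook; based on the description above (6-dimensional state in, 4 actions out, one hidden layer of 8 units with batch normalization and ReLU), a rough sketch of such a network might look like the following. The exact architecture used by `utils.run_training_loop` may differ.

```python
import torch.nn as nn

# Rough sketch of the policy network described above (an assumption, not the
# exact network in scripts/utils.py): state (6) -> hidden (8) -> action logits (4).
policy_net = nn.Sequential(
    nn.Linear(6, 8),
    nn.BatchNorm1d(8),
    nn.ReLU(),
    nn.Linear(8, 4),  # logits over the 4 treatment actions
)
```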
class NoTreatmentPolicy(utils.Policy): """The policy of always no treatment.""" def __init__(self): super(NoTreatmentPolicy, self).__init__(env) def sample_action(self, obs): return 0 class MaxTreatmentPolicy(utils.Policy): """The policy of always applying both RT inhibitor and protease inhibitor.""" def __init__(self): super(MaxTreatmentPolicy, self).__init__(env) def sample_action(self, obs): return 3 class RandomPolicy(utils.Policy): """The policy of picking a random action at each time step.""" def __init__(self): super(RandomPolicy, self).__init__(env) def sample_action(self, obs): return np.random.randint(4)
_____no_output_____
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
Policy Gradient Implementation Details For a given state $s$, a policy can be written as a probability distribution $\pi_\theta(s, a)$ over actions $a$, where $\theta$ is the parameter of the policy. The reinforcement learning objective is to learn a $\theta^*$ that maximizes the objective function $\;\;\;\; J(\theta) = E_{\tau \sim \pi_\theta}[r(\tau)]$, where $\tau$ is the trajectory sampled according to policy $\pi_\theta$ and $r(\tau)$ is the sum of discounted rewards on trajectory $\tau$. The policy gradient approach is to take the gradient of this objective $\;\;\;\; \nabla_\theta J(\theta) = \nabla_\theta \int \pi_\theta(\tau)r(\tau)d\tau = \int \pi_\theta(\tau) \nabla_\theta \log\pi_\theta(\tau)r(\tau)d\tau = E_{\tau \sim \pi_\theta(\tau)}[\nabla_\theta \log \pi_\theta(\tau)r(\tau)]$ Reward to Go Here, $\log \pi_\theta(\tau) = \sum_{t=0}^T \log \pi_\theta(a_t \mid s_t)$ and $r(\tau) = \sum_{t=0}^T \gamma^t r_t$. Since the reward $r_t$ at time $t$ is not influenced by states and actions that happen after $t$, we can replace $\log \pi_\theta(\tau)r(\tau)$ in the equation above by $\;\;\;\;\sum_{t=0}^T \log \pi_\theta(a_t \mid s_t) \; \gamma^t \sum_{t'=t}^T \gamma^{t'-t} r_{t'}$. This technique is referred to as "reward to go". In practice, it often works better to omit the $\gamma^t$ factor. As a shorthand, we will denote $\sum_{t'=t}^T \gamma^{t'-t} r_{t'}$ as $Q_t$. In a sense, $Q_t$ represents the "reward to go". Sampling In practice this can be estimated by sampling trajectories $\tau^{(i)} = \{s_0^{(i)}, a_0^{(i)}, s_1^{(i)}, a_1^{(i)}, \cdots\} \sim \pi_\theta(\tau)$ and computing the gradient (w.r.t. $\theta$) of the loss function $\;\;\;\; Loss = -\frac{1}{N} \sum_i [\sum_{t=0}^T \log \pi_\theta(a_t^{(i)} \mid s_t^{(i)}) \;Q_t^{(i)}]$. Baseline and Advantage In practice, for better stability in training, we demean the "reward to go" $Q_t$ by a baseline $b_t$. This can be a constant or a neural network function of the state. The demeaned quantity $A_t = Q_t - b_t$ is referred to as the "advantage", as it represents how much better the action is compared to average. We can also hope for better stability by normalizing the advantage by $\tilde A_t = (A_t - \mathrm{mean}(A_t)) / \mathrm{std}(A_t)$.
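As a concrete illustration of the loss above, the snippet below sketches how the reward-to-go, the normalized advantages, and the policy-gradient loss could be computed for a single sampled trajectory. The function names are illustrative; the actual training code lives in `scripts/utils.py` (`run_training_loop`) and may differ in detail.

```python
import torch

def reward_to_go(rewards, gamma=0.99):
    """Q_t = sum_{t' >= t} gamma^(t'-t) * r_{t'} for one trajectory."""
    q = torch.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        q[t] = running
    return q

def policy_gradient_loss(log_probs, rewards, gamma=0.99):
    """-mean_t [ log pi(a_t | s_t) * normalized advantage_t ]."""
    q = reward_to_go(rewards, gamma)
    advantages = (q - q.mean()) / (q.std() + 1e-8)  # constant baseline + normalization
    return -(log_probs * advantages.detach()).mean()
```

The `utils.run_training_loop` call in the next cell wraps trajectory sampling, loss computation, and optimization into a single helper.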
learned_policy = utils.run_training_loop( env=env, n_iter=300, max_episode_length=100, batch_size=1000, learning_rate=1e-3) policies = { "learned_policy": learned_policy, "no_treatment": NoTreatmentPolicy(), "max_treatment": MaxTreatmentPolicy(), "random": RandomPolicy(), } utils.plot_sample_trajectory(policies, 100, state_names=wn.hiv.State.variable_names())
Total reward for learned_policy: 4802102.5 Total reward for no_treatment: 1762320.5 Total reward for max_treatment: 2147030.5 Total reward for random: 2171225.0
MIT
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
Testing Fastpages Notebook Blog Post> A tutorial of fastpages for Jupyter notebooks.- toc: true - badges: true- comments: true- categories: [jupyter]- image: images/chart-preview.png About This notebook is a demonstration of some of the capabilities of [fastpages](https://github.com/fastai/fastpages) with notebooks. With `fastpages` you can save your Jupyter notebooks into the `_notebooks` folder at the root of your repository, and they will automatically be converted to Jekyll compliant blog posts! Front Matter The first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:``` # Title > Awesome summary - toc: true- branch: master- badges: true- comments: true- author: Hamel Husain & Jeremy Howard- categories: [fastpages, jupyter]```- Setting `toc: true` will automatically generate a table of contents- Setting `badges: true` will automatically include GitHub and Google Colab links to your notebook.- Setting `comments: true` will enable commenting on your blog post, powered by [utterances](https://github.com/utterance/utterances). More details and options for front matter can be viewed on the [front matter section](https://github.com/fastai/fastpages#front-matter-related-options) of the README. Markdown Shortcuts A `hide` comment at the top of any code cell will hide **both the input and output** of that cell in your blog post. A `hide_input` comment at the top of any code cell will **only hide the input** of that cell.
#hide_input
print('The comment #hide_input was used to hide the code that produced this.')
The comment #hide_input was used to hide the code that produced this.
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Put a `#collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:
#collapse-hide
import pandas as pd
import altair as alt
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Put a `#collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Interactive Charts With Altair

Charts made with Altair remain interactive. Example charts taken from [this repo](https://github.com/uwdata/visualization-curriculum), specifically [this notebook](https://github.com/uwdata/visualization-curriculum/blob/master/altair_interaction.ipynb).
# hide
df = pd.read_json(movies)                               # load movies data
genres = df['Major_Genre'].unique()                     # get unique field values
genres = list(filter(lambda d: d is not None, genres))  # filter out None values
genres.sort()                                           # sort alphabetically

#hide
mpaa = ['G', 'PG', 'PG-13', 'R', 'NC-17', 'Not Rated']
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 1: DropDown
# single-value selection over [Major_Genre, MPAA_Rating] pairs
# use specific hard-wired values as the initial selected values
selection = alt.selection_single(
    name='Select',
    fields=['Major_Genre', 'MPAA_Rating'],
    init={'Major_Genre': 'Drama', 'MPAA_Rating': 'R'},
    bind={'Major_Genre': alt.binding_select(options=genres),
          'MPAA_Rating': alt.binding_radio(options=mpaa)}
)

# scatter plot, modify opacity based on selection
alt.Chart(movies).mark_circle().add_selection(
    selection
).encode(
    x='Rotten_Tomatoes_Rating:Q',
    y='IMDB_Rating:Q',
    tooltip='Title:N',
    opacity=alt.condition(selection, alt.value(0.75), alt.value(0.05))
)
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 2: Tooltips
alt.Chart(movies).mark_circle().add_selection(
    alt.selection_interval(bind='scales', encodings=['x'])
).encode(
    x='Rotten_Tomatoes_Rating:Q',
    y=alt.Y('IMDB_Rating:Q', axis=alt.Axis(minExtent=30)),  # use min extent to stabilize axis title placement
    tooltip=['Title:N', 'Release_Date:N', 'IMDB_Rating:Q', 'Rotten_Tomatoes_Rating:Q']
).properties(
    width=600,
    height=400
)
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Example 3: More Tooltips
# select a point for which to provide details-on-demand
label = alt.selection_single(
    encodings=['x'],  # limit selection to x-axis value
    on='mouseover',   # select on mouseover events
    nearest=True,     # select data point nearest the cursor
    empty='none'      # empty selection includes no data points
)

# define our base line chart of stock prices
base = alt.Chart().mark_line().encode(
    alt.X('date:T'),
    alt.Y('price:Q', scale=alt.Scale(type='log')),
    alt.Color('symbol:N')
)

alt.layer(
    base,  # base line chart

    # add a rule mark to serve as a guide line
    alt.Chart().mark_rule(color='#aaa').encode(
        x='date:T'
    ).transform_filter(label),

    # add circle marks for selected time points, hide unselected points
    base.mark_circle().encode(
        opacity=alt.condition(label, alt.value(1), alt.value(0))
    ).add_selection(label),

    # add white stroked text to provide a legible background for labels
    base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(
        text='price:Q'
    ).transform_filter(label),

    # add text labels for stock prices
    base.mark_text(align='left', dx=5, dy=-5).encode(
        text='price:Q'
    ).transform_filter(label),

    data=stocks
).properties(
    width=700,
    height=400
)
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Data Tables

You can display tables in the usual way in your blog:
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
df = pd.read_json(movies)

# display table with pandas
df[['Title', 'Worldwide_Gross', 'Production_Budget', 'Distributor', 'MPAA_Rating',
    'IMDB_Rating', 'Rotten_Tomatoes_Rating']].head()
_____no_output_____
Apache-2.0
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
Lambda School Data Science - Making Data-backed Assertions

This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.

Assignment - what's going on here?

Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.

Try to figure out which variables are possibly related to each other, and which may be confounding relationships.

Try to isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
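Before diving into crosstabs, one quick way to screen for candidate relationships is a plain correlation matrix plus a coarse group-by. This is only a sketch of a possible starting point, reusing the CSV URL and the column names (`age`, `weight`, `exercise_time`) referenced in the cells below; it is not part of the original assignment solution.

```python
import pandas as pd

url = ('https://raw.githubusercontent.com/davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data/'
       'master/module3-databackedassertions/persons.csv')
persons = pd.read_csv(url)

# Pairwise Pearson correlations between the numeric columns.
print(persons[['age', 'weight', 'exercise_time']].corr())

# Mean weight and exercise time within coarse age bands,
# to see whether age confounds the weight/exercise relationship.
age_bands = pd.cut(persons['age'], 3)
print(persons.groupby(age_bands)[['weight', 'exercise_time']].mean())
```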
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
df.head()

# !pip install pandas==0.23.4

# Weight seems to make the most sense as a dependent variable. We would expect weight to go down as exercise time goes up--but what effect does age have?
pd.crosstab(df['exercise_time'], df['weight'])

# This is useless; I need bins for both columns.
weight_bins = pd.cut(df['weight'], 5)
time_bins = pd.cut(df['exercise_time'], 5)
ct1 = pd.crosstab(time_bins, weight_bins, normalize='columns')
ct1

# Data is a little messy. The lowest weights have a high % of people with lots of exercise, and the highest weight has 0% of such people, though the sample is small.
# However, looking at those who don't exercise, their percentage at the highest weight goes *down*--and the second-least-exercise group has similarly messy data.
# So what effect does age have?
age_bins = pd.cut(df['age'], 5)
ct2 = pd.crosstab(age_bins, weight_bins, normalize='columns')
ct2

ct3 = pd.crosstab(age_bins, time_bins, normalize='columns')
ct3

# The relationships still aren't clear to me. I'm going to try more things.
ct1.plot(kind='bar');

ct4 = pd.crosstab(weight_bins, [time_bins, age_bins], normalize='columns')
ct4

ct5 = ct4.iloc[:, [0, 1, 2, 3, 4]]
ct5

ct5.plot(kind='bar');

# I think it would be more helpful to switch time and age.
ct6 = pd.crosstab(weight_bins, [age_bins, time_bins], normalize='columns')
ct6

age_bins2 = pd.cut(df['age'], 3)
time_bins2 = pd.cut(df['exercise_time'], 3)
weight_bins2 = pd.cut(df['weight'], 3)
ct7 = pd.crosstab(weight_bins2, [age_bins2, time_bins2], normalize='columns')
ct7

# By reducing the number of bins, I think this shows the relationships the clearest. Regardless of age, the low weight goes up as exercise time does,
# the middle weight is more of a standard distribution vis-a-vis exercise time, and the highest weight goes down as exercise time increases
# (and is notably nonexistent for the largest exercise time).
ct7.plot(kind='bar')

# This is not what I want. I have to flip the axes.
ct8 = pd.crosstab([age_bins2, time_bins2], weight_bins2, normalize='columns')
ct8

ct8.plot(kind='bar');

# For all ages, the highest weight class drops dramatically once they get a moderate amount of exercise (and disappears entirely with a lot of exercise).
# The lowest age class conforms to my hypothesis: the lowest weight class rises as exercise time goes up, while the moderate weight class is more of a bell curve.
# The medium age class mostly conforms to my hypothesis, except the moderate weight class dips slightly as exercise goes from low to medium.
# The oldest age class is interesting in that the percentage of old people of *any* weight drops as exercise increases. This is likely because older people
# are less likely to exercise, period--see above crosstabs.
_____no_output_____
MIT
module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data
1. Finding Correlation from Scratch
factor = []

for i in df.values:
    factor.append(round(i[4]/i[2], 5))  # i[2] = Views, i[4] = Comments

df['view_to_comments'] = factor
df.head()

print("Minimum : ", min(df['view_to_comments']))
print("Maximum : ", max(df['view_to_comments']))
print(df['view_to_comments'].mode())
Minimum :  0.00013
Maximum :  0.05427
0    0.00137
dtype: float64
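The cell above estimates a typical comments-per-view ratio, but since the section title mentions correlation, here is a short sketch of how a Pearson correlation could be computed from scratch. It is illustrative only and not part of the original analysis; the commented usage reuses the column positions the notebook documents (i[2] = views, i[4] = comments).

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient computed from first principles."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_dev, y_dev = x - x.mean(), y - y.mean()
    return (x_dev * y_dev).sum() / np.sqrt((x_dev ** 2).sum() * (y_dev ** 2).sum())

# Quick self-check on a perfectly linear toy relationship (prints 1.0).
print(pearson_corr([1, 2, 3, 4], [10, 20, 30, 40]))

# For the TED Talks dataframe, reusing the positional convention above:
# views = [row[2] for row in df.values]
# comments = [row[4] for row in df.values]
# print(pearson_corr(views, comments))
```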
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
2. Adding Predicted Comments Column
comments = []

for i in df['views']:
    comments.append(int(i * .00137))  # 0.00137 = modal views-to-comments ratio found in section 1

df['pred_comments'] = comments
df.head()
_____no_output_____
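As a minor stylistic alternative (not from the original notebook), the same prediction column can be built without an explicit Python loop, using a vectorized pandas expression; the 0.00137 factor is the modal ratio found above.

```python
# Same result as the loop above: truncate views * 0.00137 to an integer.
df['pred_comments'] = (df['views'] * 0.00137).astype(int)
df.head()
```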
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Plotting Correlation

3. Correlation b/w Comments and Views
data = []

for i in df.values:
    data.append([i[2], i[4]])

df_ = pd.DataFrame(data, columns=['views', 'comments'])

views = list(df_.sort_values(by='views')['views'])
comments = list(df_.sort_values(by='views')['comments'])

fig, ax = plt.subplots(figsize=(15, 4))
ax.plot(views, comments)
plt.show()

df.head()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
4. Correlation b/w Views & [Comments, Predicted Comments]
data = []

for i in df.values:
    data.append([i[2], i[4], i[10]])

df_ = pd.DataFrame(data, columns=['views', 'comments', 'pred_comments'])

views = list(df_.sort_values(by='views')['views'])
likes = list(df_.sort_values(by='views')['comments'])
likes_ = list(df_.sort_values(by='views')['pred_comments'])

fig, ax = plt.subplots(figsize=(15, 4))
plt.plot(views, likes, label='Actual')
plt.plot(views, likes_, label='Predicted')
plt.legend()
plt.show()

df.head()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
5. Finding the Loss

5.1) Finding the Mean Absolute Error
total_error = []

for i in df.values:
    t = i[4] - i[10]  # i[4] is Actual Comments, i[10] is Predicted Comments
    if (t >= 0):
        total_error.append(t)
    else:
        total_error.append(-t)

sum(total_error)/len(total_error)
_____no_output_____
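The cell above averages absolute differences between actual and predicted comments, i.e. the mean absolute error. For comparison, a mean squared error could be computed the same way; this sketch reuses the documented column positions (i[4] = actual comments, i[10] = predicted comments) and is not part of the original analysis.

```python
import numpy as np

actual = np.array([row[4] for row in df.values], dtype=float)
predicted = np.array([row[10] for row in df.values], dtype=float)

mae = np.mean(np.abs(actual - predicted))   # what the cell above computes
mse = np.mean((actual - predicted) ** 2)    # mean squared error for comparison
print("MAE:", mae)
print("MSE:", mse)
```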
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
5.2) Finding View to Comments Ratio
view_to_comments = []

for i in df.values:
    view_to_comments.append(round(i[4]/i[2], 5))

df['view_to_comments'] = view_to_comments

st = int(df['view_to_comments'].min() * 100000)
end = int(df['view_to_comments'].max() * 100000)

factors = []

# Candidate ratios between the observed minimum and maximum, in steps of 1e-5.
for i in range(st, end + 1, 1):
    factors.append(i/100000)
_____no_output_____
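The cell above only builds the grid of candidate ratios in `factors`. A natural next step, sketched below rather than taken from the original notebook, is to pick the candidate that minimizes the mean absolute error between actual and predicted comments.

```python
import numpy as np

views = np.array([row[2] for row in df.values], dtype=float)
actual = np.array([row[4] for row in df.values], dtype=float)

# Evaluate every candidate ratio and keep the one with the smallest mean absolute error.
errors = [np.mean(np.abs(actual - views * f)) for f in factors]
best_factor = factors[int(np.argmin(errors))]
print("Best views-to-comments factor:", best_factor, "with MAE", min(errors))
```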
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization