4. Get the data

The top-level element is the Query. For each query, fields can be added (usually statistics/measures) that you want to get information on. A Query can be run either on a single region or on multiple regions (e.g. all Bundesländer).

Single Region

If I want information on, for example, all births in Berlin over the past years:
# create a query for the region 11 (Berlin)
query = Query.region('11')

# add a field (the statistic) to the query
field_births = query.add_field('BEV001')

# get the data of this query
query.results().head()
To get the short description in the result data frame instead of the cryptic ID (e.g. "Lebend Geborene" instead of BEV001), set the argument `verbose_statistics=True` in the results call:
query.results(verbose_statistics=True).head()
So far we only get the count of births per year and the source of the data (year, value, and source are default fields). But the statistic contains more information that we can query. Let's look at the metadata of the statistic:
# get information on the field
field_births.get_info()
kind: OBJECT
description: Lebend Geborene
arguments:
  year: LIST of type SCALAR(Int)
  statistics: LIST of type ENUM(BEV001Statistics)
    enum values:
      R12612: Statistik der Geburten
  ALTMT1: LIST of type ENUM(ALTMT1)
    enum values:
      ALT000B20: unter 20 Jahre
      ALT020B25: 20 bis unter 25 Jahre
      ALT025B30: 25 bis unter 30 Jahre
      ALT030B35: 30 bis unter 35 Jahre
      ALT035B40: 35 bis unter 40 Jahre
      ALT040UM: 40 Jahre und mehr
      GESAMT: Gesamt
  GES: LIST of type ENUM(GES)
    enum values:
      GESM: männlich
      GESW: weiblich
      GESAMT: Gesamt
  NATEL1: LIST of type ENUM(NATEL1)
    enum values:
      NATAAO: Mutter und Vater Ausländer, ohne Angabe der Nationalität
      NATDDDO: Mutter und Vater Deutsche, Mutter Deutsche und Vater o.Angabe der Nat.
      NATEETA: ein Elternteil Ausländer
      GESAMT: Gesamt
  NAT: LIST of type ENUM(NAT)
    enum values:
      NATA: Ausländer(innen)
      NATD: Deutsche
      GESAMT: Gesamt
  LEGIT2: LIST of type ENUM(LEGIT2)
    enum values:
      LEGIT01A: Eltern miteinander verheiratet
      LEGIT02A: Eltern nicht miteinander verheiratet
      GESAMT: Gesamt
  BEVM01: LIST of type ENUM(BEVM01)
    enum values:
      MONAT01: Januar
      MONAT02: Februar
      MONAT03: März
      MONAT04: April
      MONAT05: Mai
      MONAT06: Juni
      MONAT07: Juli
      MONAT08: August
      MONAT09: September
      MONAT10: Oktober
      MONAT11: November
      MONAT12: Dezember
      GESAMT: Gesamt
  filter: INPUT_OBJECT(BEV001Filter)
fields:
  id: Interne eindeutige ID
  year: Jahr des Stichtages
  value: Wert
  source: Quellenverweis zur GENESIS Regionaldatenbank
  ALTMT1: Altersgruppen der Mutter (unter 20 bis 40 u.m.)
  GES: Geschlecht
  NATEL1: Nationalität der Eltern
  NAT: Nationalität
  LEGIT2: Legitimität
  BEVM01: Monat der Geburt
enum values: None
The arguments tell us what we can filter on (e.g. only data on female babies). The fields tell us what additional information can be displayed in our results.
# add a filter
field_births.add_args({'GES': 'GESW'})

# now only about half the number of births is returned, as only the results for female babies are queried
query.results().head()

# add the field NAT (nationality) to the results
field_births.add_field('NAT')
**CAREFUL**: The information for the fields (e.g. nationality) is returned as a total by default. Therefore, if no argument "NAT" is specified in addition to the field, only "None" will be displayed. In order to get information on all possible values, the argument "ALL" needs to be added (the rows with value "None" are the aggregated values over all options):
field_births.add_args({'NAT': 'ALL'})
query.results().head()
To display the short description of the enum values instead of the cryptic IDs (e.g. Ausländer(innen) instead of NATA), set the argument `verbose_enums=True` in the results call:
query.results(verbose_enums=True).head()
Multiple Regions

To display data for several specific regions, a list of region IDs can be used:
query_multiple = Query.region(['01', '02'])
query_multiple.add_field('BEV001')
query_multiple.results().sort_values('year').head()
To display data for e.g. all Bundesländer or for all regions within a Bundesland, you can use the function `all_regions()` and
- specify the nuts level,
- specify the lau level, or
- specify a parent ID (careful: not only the regions on the next lower level will be returned, but all levels below; e.g. if you specify a parent on nuts level 1, then the "children" on nuts 2 but also the "grandchildren" on nuts 3, lau 1 and lau 2 will be returned).
# get data for all Bundesländer
query_all = Query.all_regions(nuts=1)
query_all.add_field('BEV001')
query_all.results().sort_values('year').head(12)

# get data for all regions within Brandenburg
query_all = Query.all_regions(parent='12')
query_all.add_field('BEV001')
query_all.results().head()

# get data for all nuts 3 regions within Brandenburg
query_all = Query.all_regions(parent='12', nuts=3)
query_all.add_field('BEV001')
query_all.results().sort_values('year').head()
Chapter 4

`Original content created by Cam Davidson-Pilon`
`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`

______

The greatest theorem never told

This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.

The Law of Large Numbers

Let $Z_i$ be $N$ independent samples from some probability distribution. According to *the Law of Large Numbers*, so long as the expected value $E[Z]$ is finite, the following holds:

$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$

In words:

> The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.

This may seem like a boring result, but it will be the most useful tool you use.

Intuition

If the above Law is somewhat surprising, it can be made clearer by examining a simple example. Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:

$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$

By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values:

\begin{align}
\frac{1}{N} \sum_{i=1}^N \;Z_i
& =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\\\[5pt]
& = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\\\[5pt]
& = c_1 \times \text{ (approximate frequency of $c_1$) } \\\\
& \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\\\[5pt]
& \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\\\[5pt]
& = E[Z]
\end{align}

Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost *any distribution*, except for some important cases we will encounter later.

Example

Below is a diagram of the Law of Large Numbers in action for three different sequences of Poisson random variables. We sample `sample_size = 100000` Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average of the first $n$ samples, for $n=1$ to `sample_size`.
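As a warm-up (an illustrative sketch, not part of the original text), the two-value partition argument above can be checked numerically before the Poisson demonstration that follows; the values of $c_1$, $c_2$ and $P(Z = c_1)$ below are arbitrary choices:

import numpy as np

c1, c2, p_c1 = 1.0, 3.0, 0.25                       # P(Z = c1) = 0.25, P(Z = c2) = 0.75
true_expected_value = c1 * p_c1 + c2 * (1 - p_c1)   # = 2.5

N = 100000
Z = np.where(np.random.rand(N) < p_c1, c1, c2)

freq_c1 = (Z == c1).mean()   # approximate frequency of c1
freq_c2 = (Z == c2).mean()   # approximate frequency of c2

print("partitioned average: ", c1 * freq_c1 + c2 * freq_c2)
print("plain sample average:", Z.mean())
print("true expected value: ", true_expected_value)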
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt

figsize(12.5, 5)

sample_size = 100000
expected_value = lambda_ = 4.5
poi = np.random.poisson
N_samples = range(1, sample_size, 100)

for k in range(3):
    samples = poi(lambda_, sample_size)

    partial_average = [samples[:i].mean() for i in N_samples]

    plt.plot(N_samples, partial_average, lw=1.5,
             label="average of $n$ samples; seq. %d" % k)

plt.plot(N_samples, expected_value * np.ones_like(partial_average),
         ls="--", label="true expected value", c="k")

plt.ylim(4.35, 4.65)
plt.title("Convergence of the average of \n random variables to its expected value")
plt.ylabel("average of $n$ samples")
plt.xlabel("# of samples, $n$")
plt.legend();
Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then how it *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have another name for *flirting*: convergence.

Another very relevant question we can ask is *how quickly am I converging to the expected value?* Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — *compute on average*? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:

$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$

The above formula is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same.) As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:

$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$

By computing the above many, $N_Y$, times (remember, it is random), and averaging them:

$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$

Finally, taking the square root:

$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$
figsize(12.5, 4)

N_Y = 250  # use this many to approximate D(N)
N_array = np.arange(1000, 50000, 2500)  # use this many samples in the approx. to the variance.
D_N_results = np.zeros(len(N_array))

lambda_ = 4.5
expected_value = lambda_  # for X ~ Poi(lambda), E[X] = lambda

def D_N(n):
    """
    This function approx. D_n, the average variance of using n samples.
    """
    Z = poi(lambda_, (n, N_Y))
    average_Z = Z.mean(axis=0)
    return np.sqrt(((average_Z - expected_value) ** 2).mean())

for i, n in enumerate(N_array):
    D_N_results[i] = D_N(n)

plt.xlabel("$N$")
plt.ylabel("expected squared-distance from true value")
plt.plot(N_array, D_N_results, lw=3,
         label="expected distance between\nexpected value and \naverage of $N$ random variables.")
plt.plot(N_array, np.sqrt(expected_value) / np.sqrt(N_array), lw=2, ls="--",
         label=r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$")
plt.legend()
plt.title("How 'fast' is the sample average converging? ");
As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases: we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.

It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variables distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is

$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$

This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty, so what's the *statistical* point of adding extra precise digits? Then again, drawing samples can be so computationally cheap that having a *larger* $N$ is fine too.

How do we compute $Var(Z)$ though? The variance is simply another expected value that can be approximated! Once we have the expected value (estimated via the Law of Large Numbers and denoted $\mu$), we can estimate the variance:

$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$

Expected values and probabilities

There is an even less explicit relationship between expected value and estimating probabilities. Define the *indicator function*

$$\mathbb{1}_A(x) = \begin{cases} 1 &  x \in A \\\\ 0 &  else \end{cases}$$

Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:

$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] =  P(A) $$

Again, this is fairly obvious after a moment's thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 5, and we have many samples from a $Exp(.5)$ distribution.

$$ P( Z > 5 ) \approx \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{z > 5 }(Z_i) $$
N = 10000

# note: np.random.exponential takes the *scale* (the mean) as its argument, not the rate
print(np.mean([np.random.exponential(0.5) > 5 for i in range(N)]))
0.0001
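As noted above, $Var(Z)$ is itself just another expected value, so it too can be approximated from samples. A minimal illustrative sketch (not from the original notebook) using Poisson samples, for which the true variance equals $\lambda$:

import numpy as np

lambda_ = 4.5
Z = np.random.poisson(lambda_, 100000)

mu_hat = Z.mean()                     # Law of Large Numbers estimate of E[Z]
var_hat = ((Z - mu_hat) ** 2).mean()  # estimate of Var(Z) = E[(Z - mu)^2]

print(mu_hat, var_hat)                # both should be close to lambda = 4.5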
What does this all have to do with Bayesian statistics? *Point estimates*, to be introduced in the next chapter, are computed in Bayesian inference using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.

When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and it also depends on the variance of the samples (recall from above that a high variance means the average will converge more slowly).

We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us *confidence in how unconfident we should be*. The next section deals with this issue.

The Disorder of Small Numbers

The Law of Large Numbers is only valid as $N$ gets *infinitely* large: never truly attainable. While the Law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.

Example: Aggregated geographic data

Often data comes in aggregated form. For instance, data may be grouped at the state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can *fail* for areas with small populations.

We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, the population of each county is uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does **not** vary across counties, and each individual, regardless of the county he or she currently lives in, has the same height distribution:

$$ \text{height} \sim \text{Normal}(150, 15 ) $$

We aggregate the individuals at the county level, so we only have data for the *average in the county*. What might our dataset look like?
figsize(12.5, 4)

std_height = 15
mean_height = 150

n_counties = 5000
pop_generator = np.random.randint
norm = np.random.normal

# generate some artificial population numbers
population = pop_generator(100, 1500, n_counties)

average_across_county = np.zeros(n_counties)
for i in range(n_counties):
    # generate some individuals and take the mean
    average_across_county[i] = norm(mean_height, 1. / std_height, population[i]).mean()

# locate the counties with the apparently most extreme average heights.
i_min = np.argmin(average_across_county)
i_max = np.argmax(average_across_county)

# plot population size vs. recorded average
plt.scatter(population, average_across_county, alpha=0.5, c="#7A68A6")
plt.scatter([population[i_min], population[i_max]],
            [average_across_county[i_min], average_across_county[i_max]],
            s=60, marker="o", facecolors="none",
            edgecolors="#A60628", linewidths=1.5,
            label="extreme heights")

plt.xlim(100, 1500)
plt.title("Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot([100, 1500], [150, 150], color="k", label="true expected height", ls="--")
plt.legend(scatterpoints=1);
What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.

We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme average heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights.
print("Population sizes of 10 'shortest' counties: ") print(population[ np.argsort( average_across_county )[:10] ], '\n') print("Population sizes of 10 'tallest' counties: ") print(population[ np.argsort( -average_across_county )[:10] ])
Population sizes of 10 'shortest' counties: 
[109 135 135 133 109 157 175 120 105 131] 

Population sizes of 10 'tallest' counties: 
[122 133 313 109 124 280 106 198 326 216]
Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.

Example: Kaggle's *U.S. Census Return Rate Challenge*

Below is data from the 2010 US census, which partitions populations beyond counties down to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a block group, measured between 0 and 100, using census variables (median income, number of females in the block group, number of trailer parks, average number of children, etc.). Below we plot the census mail-back rate versus block group population:
figsize(12.5, 6.5)

data = np.genfromtxt("./data/census_data.csv", skip_header=1, delimiter=",")
plt.scatter(data[:, 1], data[:, 0], alpha=0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3)
plt.ylim(-5, 105)

i_min = np.argmin(data[:, 0])
i_max = np.argmax(data[:, 0])

plt.scatter([data[i_min, 1], data[i_max, 1]],
            [data[i_min, 0], data[i_max, 0]],
            s=60, marker="o", facecolors="none",
            edgecolors="#A60628", linewidths=1.5,
            label="most extreme points")

plt.legend(scatterpoints=1);
The above is a classic phenomenon in statistics. I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).

I am perhaps overstressing the point and maybe I should have titled the book *"You don't have big data problems!"*, but here again is an example of the trouble with *small datasets*, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers, whereas the Law can be applied to big datasets (ex. big data) without hassle. I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are *stable*, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points from a small dataset can create very different results.

For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript [The Most Dangerous Equation](http://nsm.uh.edu/~dgraur/niv/TheMostDangerousEquation.pdf).

Example: How to order Reddit submissions

You may have disagreed with the original statement that the Law of Large Numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with so few reviewers, the average rating is **not** a good reflection of the true value of the product.

This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, returns poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, while truly higher-quality videos or comments are hidden in later pages with *falsely-substandard* ratings of around 4.8. How can we correct this?

Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, called submissions, for people to comment on. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.

How would you determine which submissions are the best? There are a number of ways to achieve this:

1. *Popularity*: A submission is considered good if it has many upvotes. A problem with this model is a submission with hundreds of upvotes but thousands of downvotes: while very *popular*, it is likely more controversial than good.
2. *Difference*: Use the *difference* of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the *Top* submissions to be those made during high-traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
3. *Time adjusted*: Consider using Difference divided by the age of the submission. This creates a *rate*, something like *difference per second*, or *per minute*. An immediate counter-example is, if we use per second, a 1-second-old submission with 1 upvote would be better than a 100-second-old submission with 99 upvotes. One can avoid this by only considering submissions that are at least t seconds old. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
4. *Ratio*: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions that score well can be considered Top just as likely as older submissions, provided they have many upvotes relative to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is *more likely* to be better (see the small numeric sketch below).

I used the phrase *more likely* for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.

What we really want is an estimate of the *true upvote ratio*. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission an upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.

One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:

1. Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold. Again, problems are encountered: there is a tradeoff between the number of submissions available to use and a higher threshold with its associated ratio precision.
2. Biased data: Reddit is composed of different subpages, called subreddits. Two examples are *r/aww*, which posts pics of cute animals, and *r/politics*. It is very likely that user behaviour towards submissions of these two subreddits is very different: visitors are likely friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.

In light of these, I think it is better to use a `Uniform` prior. With our prior in place, we can find the posterior of the true upvote ratio.
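Before turning to real data, here is a tiny numeric sketch (toy numbers, not Reddit data) of how the naive heuristics above rank two hypothetical submissions; note that the lone-upvote submission wins under the raw ratio:

# toy comparison of the naive ranking heuristics discussed above
# (named submissions_toy to avoid clashing with the `submissions` array used later)
submissions_toy = {
    "one lone upvote": {"up": 1, "down": 0},
    "999 up, 1 down": {"up": 999, "down": 1},
}

for name, s in submissions_toy.items():
    up, down = s["up"], s["down"]
    print(name, "| popularity:", up, "| difference:", up - down,
          "| ratio:", round(up / float(up + down), 3))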
The Python script `top_showerthoughts_submissions.py` will scrape the best posts from the `showerthoughts` community on Reddit. This is a text-only community so the title of each post *is* the post. Below is the top post as well as some other sample posts:
# adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2

print("Post contents: \n")
print(top_post)

"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions = len(votes)
submissions = np.random.randint(n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------" % n_submissions)
for i in submissions:
    print('"' + contents[i] + '"')
    print("upvotes/downvotes: ", votes[i, :], "\n")
Some Submissions (out of 98 total) 
-----------
"Rappers from the 90's used guns when they had beef rappers today use Twitter."
upvotes/downvotes:  [32 3] 

"All polls are biased towards people who are willing to take polls"
upvotes/downvotes:  [1918 101] 

"Taco Bell should give customers an extra tortilla so they can make a burrito out of all the stuff that spilled out of the other burritos they ate."
upvotes/downvotes:  [79 17] 

"There should be an /r/alanismorissette where it's just examples of people using "ironic" incorrectly"
upvotes/downvotes:  [33 6]
For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular submission's upvote/downvote pair.
import pymc3 as pm

def posterior_upvote_ratio(upvotes, downvotes, samples=20000):
    """
    This function accepts the number of upvotes and downvotes a particular submission
    received, and the number of posterior samples to return to the user.
    Assumes a uniform prior.
    """
    N = upvotes + downvotes
    with pm.Model() as model:
        upvote_ratio = pm.Uniform("upvote_ratio", 0, 1)
        observations = pm.Binomial("obs", N, upvote_ratio, observed=upvotes)

        trace = pm.sample(samples, step=pm.Metropolis())

    burned_trace = trace[int(samples/4):]
    return burned_trace["upvote_ratio"]
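Because the `Uniform` prior is a Beta(1, 1) distribution and the Binomial likelihood is conjugate to it, the posterior here is also available in closed form as Beta(1 + upvotes, 1 + downvotes); this is what the appendix at the end of the chapter relies on. A small sketch (not in the original notebook) that samples this posterior directly, without MCMC:

import numpy as np

def posterior_upvote_ratio_conjugate(upvotes, downvotes, samples=20000):
    # exact posterior under a Uniform (i.e. Beta(1, 1)) prior
    return np.random.beta(1 + upvotes, 1 + downvotes, size=samples)

# e.g. a 999-up / 1-down submission vs. a 1-up / 0-down submission
print(posterior_upvote_ratio_conjugate(999, 1).mean())
print(posterior_upvote_ratio_conjugate(1, 0).mean())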
Below are the resulting posterior distributions.
figsize(11., 8)

posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
    j = submissions[i]
    posteriors.append(posterior_upvote_ratio(votes[j, 0], votes[j, 1]))
    plt.hist(posteriors[i], bins=10, normed=True, alpha=.9,
             histtype="step", color=colours[i % 5], lw=3,
             label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
    plt.hist(posteriors[i], bins=10, normed=True, alpha=.2,
             histtype="stepfilled", color=colours[i], lw=3)

plt.legend(loc="upper left")
plt.xlim(0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");
Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
 [-------100%-------] 20000 of 20000 in 1.4 sec. | SPS: 14595.5 | ETA: 0.0
Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
 [-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15189.5 | ETA: 0.0
Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
 [-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15429.0 | ETA: 0.0
Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
 [-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15146.5 | ETA: 0.0
Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty about what the true upvote ratio might be.

Sorting!

We have been ignoring the goal of this exercise: how do we sort the submissions from *best to worst*? Of course, we cannot sort distributions; we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Choosing the mean is a bad choice though, because the mean does not take into account the uncertainty of the distributions.

I suggest using the *95% least plausible value*, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:
N = posteriors[0].shape[0]
lower_limits = []

for i in range(len(submissions)):
    j = submissions[i]
    plt.hist(posteriors[i], bins=20, normed=True, alpha=.9,
             histtype="step", color=colours[i], lw=3,
             label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
    plt.hist(posteriors[i], bins=20, normed=True, alpha=.2,
             histtype="stepfilled", color=colours[i], lw=3)
    v = np.sort(posteriors[i])[int(0.05 * N)]
    # plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 )
    plt.vlines(v, 0, 10, color=colours[i], linestyles="--", linewidths=3)
    lower_limits.append(v)
    plt.legend(loc="upper left")

plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");

order = np.argsort(-np.array(lower_limits))
print(order, lower_limits)
[1 0 2 3] [0.80034320917496615, 0.94092009444598201, 0.74660503350561902, 0.72190353389632911]
The best submissions, according to our procedure, are the submissions that are *most likely* to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.

Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. That is, even in the worst-case scenario, when we have severely overestimated the upvote ratio, we can be sure the best submissions are still on top. Under this ordering, we impose the following very natural properties:

1. Given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
2. Given two submissions with the same number of votes, we still assign the submission with more upvotes as *better*.

But this is too slow for real-time!

I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, the data has likely changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast:

$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$

where

\begin{align}
& a = 1 + u \\\\
& b = 1 + d \\\\
\end{align}

$u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
def intervals(u, d):
    a = 1. + u
    b = 1. + d
    mu = a / (a + b)
    std_err = 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))
    return (mu, std_err)

print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:, 0], votes[:, 1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")

order = np.argsort(-lb)
ordered_contents = []
for i in order[:40]:
    ordered_contents.append(contents[i])
    print(votes[i, 0], votes[i, 1], contents[i])
    print("-------------")
Approximate lower bounds: [ 0.93349005 0.9532194 0.94149718 0.90859764 0.88705356 0.8558795 0.85644927 0.93752679 0.95767101 0.91131012 0.910073 0.915999 0.9140058 0.83276025 0.87593961 0.87436674 0.92830849 0.90642832 0.89187973 0.89950891 0.91295322 0.78607629 0.90250203 0.79950031 0.85219422 0.83703439 0.7619808 0.81301134 0.7313114 0.79137561 0.82701445 0.85542404 0.82309334 0.75211374 0.82934814 0.82674958 0.80933194 0.87448152 0.85350205 0.75460106 0.82934814 0.74417233 0.79924258 0.8189683 0.75460106 0.90744016 0.83838023 0.78802791 0.78400654 0.64638659 0.62047936 0.76137738 0.81365241 0.83838023 0.78457533 0.84980627 0.79249393 0.69020315 0.69593922 0.70758151 0.70268831 0.91620627 0.73346864 0.86382644 0.80877728 0.72708753 0.79822085 0.68333632 0.81699014 0.65100453 0.79809005 0.74702492 0.77318569 0.83221179 0.66500492 0.68134548 0.7249286 0.59412132 0.58191312 0.73142963 0.73142963 0.66251028 0.87152685 0.74107856 0.60935684 0.87152685 0.77484517 0.88783675 0.81814153 0.54569789 0.6122496 0.75613569 0.53511973 0.74556767 0.81814153 0.85773646 0.6122496 0.64814153] Top 40 Sorted according to approximate lower bounds: 596 18 Someone should develop an AI specifically for reading Terms & Conditions and flagging dubious parts. ------------- 2360 98 Porn is the only industry where it is not only acceptable but standard to separate people based on race, sex and sexual preference. ------------- 1918 101 All polls are biased towards people who are willing to take polls ------------- 948 50 They should charge less for drinks in the drive-thru because you can't refill them. ------------- 3740 239 When I was in elementary school and going through the DARE program, I was positive a gang of older kids was going to corner me and force me to smoke pot. Then I became an adult and realized nobody is giving free drugs to somebody that doesn't want them. ------------- 166 7 "Noted" is the professional way of saying "K". ------------- 29 0 Rewatching Mr. Bean, I've realised that the character is an eccentric genius and not a blithering idiot. ------------- 289 18 You've been doing weird cameos in your friends' dreams since kindergarten. ------------- 269 17 At some point every parent has stopped wiping their child's butt and hoped for the best. ------------- 121 6 Is it really fair to say a person over 85 has heart failure? Technically, that heart has done exceptionally well. ------------- 535 40 It's surreal to think that the sun and moon and stars we gaze up at are the same objects that have been observed for millenia, by everyone in the history of humanity from cavemen to Aristotle to Jesus to George Washington. ------------- 527 40 I wonder if America's internet is censored in a similar way that North Korea's is, but we have no idea of it happening. ------------- 1510 131 Kenny's family is poor because they're always paying for his funeral. ------------- 43 1 If I was as careful with my whole paycheck as I am with my last $20 I'd be a whole lot better off ------------- 162 10 Black hair ties are probably the most popular bracelets in the world. ------------- 107 6 The best answer to the interview question "What is your greatest weakness?" is "interviews". ------------- 127 8 Surfing the internet without ads feels like a summer evening without mosquitoes ------------- 159 12 I wonder if Superman ever put a pair of glasses on Lois Lane's dog, and she was like "what's this Clark? Did you get me a new dog?" 
------------- 21 0 Sitting on a cold toilet seat or a warm toilet seat both suck for different reasons. ------------- 1414 157 My life is really like Rihanna's song, "just work work work work work" and the rest of it I can't really understand. ------------- 222 22 I'm honestly slightly concerned how often Reddit commenters make me laugh compared to my real life friends. ------------- 52 3 The world must have been a spookier place altogether when candles and gas lamps were the only sources of light at night besides the moon and the stars. ------------- 194 19 I have not been thankful enough in the last few years that the Black Eyed Peas are no longer ever on the radio ------------- 18 0 Living on the coast is having the window seat of the land you live on. ------------- 18 0 Binoculars are like walkie talkies for the deaf. ------------- 28 1 Now that I am a parent of multiple children I have realized that my parents were lying through their teeth when they said they didn't have a favorite. ------------- 16 0 I sneer at people who read tabloids, but every time I look someone up on Wikipedia the first thing I look for is what controversies they've been involved in. ------------- 1559 233 Kid's menus at restaurants should be smaller portions of the same adult dishes at lower prices and not the junk food that they usually offer. ------------- 1426 213 Eventually once all phones are waterproof we'll be able to push people into pools again ------------- 61 5 Myspace is so outdated that jokes about it being outdated has become outdated ------------- 52 4 As a kid, seeing someone step on a banana peel and not slip was a disappointment. ------------- 90 9 Yahoo!® is the RadioShack® of the Internet. ------------- 34 2 People who "tell it like it is" rarely do so to say something nice ------------- 39 3 Closing your eyes after turning off your alarm is a very dangerous game. ------------- 39 3 Your known 'first word' is the first word your parents heard you speak. In reality, it may have been a completely different word you said when you were alone. ------------- 87 10 "Smells Like Teen Spirit" is as old to listeners of today as "Yellow Submarine" was to listeners of 1991. ------------- 239 36 if an ocean didnt stop immigrants from coming to America what makes us think a wall will? ------------- 22 1 The phonebook was the biggest invasion of privacy that everyone was oddly ok with. ------------- 57 6 I'm actually the most productive when I procrastinate because I'm doing everything I possibly can to avoid the main task at hand. ------------- 57 6 You will never feel how long time is until you have allergies and snot slowly dripping out of your nostrils, while sitting in a classroom with no tissues. -------------
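As a sanity check (a sketch assuming SciPy is available; it is not part of the original notebook), the normal-approximation lower bound can be compared against the exact 5% quantile of the Beta(1 + u, 1 + d) posterior; the two get closer as the number of votes grows.

from scipy.stats import beta
import numpy as np

def approx_lower_bound(u, d):
    # same normal approximation as intervals() above
    a, b = 1. + u, 1. + d
    return a / (a + b) - 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))

def exact_lower_bound(u, d, q=0.05):
    # exact q-quantile of the Beta(1 + u, 1 + d) posterior
    return beta.ppf(q, 1. + u, 1. + d)

for u, d in [(1, 0), (32, 3), (999, 1), (1918, 101)]:
    print(u, d, "approx:", round(approx_lower_bound(u, d), 4),
          "exact:", round(exact_lower_bound(u, d), 4))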
We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
r_order = order[::-1][-40:]

plt.errorbar(posterior_mean[r_order], np.arange(len(r_order)),
             xerr=std_err[r_order], capsize=0, fmt="o",
             color="#7A68A6")
plt.xlim(0.3, 1)
plt.yticks(np.arange(len(r_order) - 1, -1, -1),
           map(lambda x: x[:30].replace("\n", ""), ordered_contents));
In the graphic above, you can see why sorting by mean would be sub-optimal.

Extension to Starred rating systems

The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply with simply taking the average: an item with two perfect ratings would beat an item with thousands of perfect ratings but a single sub-perfect rating.

We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. An $N$-star rating system can be seen as a more continuous version of the above, and we can treat $n$ stars rewarded as equivalent to a reward of $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:

$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$

where

\begin{align}
& a = 1 + S \\\\
& b = 1 + N - S \\\\
\end{align}

where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above.

Example: Counting Github stars

What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large Numbers. Let's start pulling some data. TODO

Conclusion

While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes only. We have seen how our inference can be affected by not considering *how the data is shaped*.

1. By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
2. Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread out rather than tightly concentrated. Thus, our inference should be correctable.
3. There are major implications of not considering the sample size: trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.

Appendix

Derivation of sorting submissions formula

Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value, $x$, such that 0.05 probability is less than $x$. This is usually done by inverting the CDF ([Cumulative Distribution Function](http://en.wikipedia.org/wiki/Cumulative_Distribution_Function)), but the CDF of the Beta, for integer parameters, is known but is a large sum [3].

We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is

$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$

Hence we solve the following equation for $x$ and have an approximate lower bound:

$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$

with $\Phi$ being the [cumulative distribution function of the normal distribution](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function). Solving gives $x = \mu + \sigma\,\Phi^{-1}(0.05) \approx \mu - 1.645\,\sigma$, which is where the factor of 1.65 in the formula comes from.

Exercises

1\. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value *given* we know $X$ is less than 1?
Would you need more samples than the original sample size to be equally accurate?
## Enter code here
import scipy.stats as stats

exp = stats.expon(scale=4)
N = 1e5
X = exp.rvs(int(N))

## ...
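One possible way to finish this exercise (a sketch, not an official solution from the book): estimate both expectations with sample averages, using only the samples with $X < 1$ for the conditional expectation.

import numpy as np
import scipy.stats as stats

exp = stats.expon(scale=4)
N = int(1e5)
X = exp.rvs(N)

est_cos = np.cos(X).mean()                 # estimate of E[cos X]
mask = X < 1
est_cos_given = np.cos(X[mask]).mean()     # estimate of E[cos X | X < 1]

print(est_cos, est_cos_given)
print("samples usable for the conditional estimate:", mask.sum(), "out of", N)

Note that only the samples with $X < 1$ (roughly 22% of them under this parametrization) contribute to the conditional estimate, which is relevant to the question about needing more samples.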
2\. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made?

-----

Kicker Careers Ranked by Make Percentage

| Rank | Kicker | Make % | Number of Kicks |
|---|---|---|---|
| 1 | Garrett Hartley | 87.7 | 57 |
| 2 | Matt Stover | 86.8 | 335 |
| 3 | Robbie Gould | 86.2 | 224 |
| 4 | Rob Bironas | 86.1 | 223 |
| 5 | Shayne Graham | 85.4 | 254 |
| … | … | … | … |
| 51 | Dave Rayner | 72.2 | 90 |
| 52 | Nick Novak | 71.9 | 64 |
| 53 | Tim Seder | 71.0 | 62 |
| 54 | Jose Cortez | 70.7 | 75 |
| 55 | Wade Richey | 66.1 | 56 |

3\. In August 2013, [a popular post](http://bpodgursky.wordpress.com/2013/08/21/average-income-per-programming-language/) on the average income per programmer of different languages was trending. Here's the summary chart (reproduced without permission, cause when you lie with stats, you gunna get the hammer). What do you notice about the extremes?

------

Average household income by programming language

| Language | Average Household Income ($) | Data Points |
|---|---|---|
| Puppet | 87,589.29 | 112 |
| Haskell | 89,973.82 | 191 |
| PHP | 94,031.19 | 978 |
| CoffeeScript | 94,890.80 | 435 |
| VimL | 94,967.11 | 532 |
| Shell | 96,930.54 | 979 |
| Lua | 96,930.69 | 101 |
| Erlang | 97,306.55 | 168 |
| Clojure | 97,500.00 | 269 |
| Python | 97,578.87 | 2314 |
| JavaScript | 97,598.75 | 3443 |
| Emacs Lisp | 97,774.65 | 355 |
| C# | 97,823.31 | 665 |
| Ruby | 98,238.74 | 3242 |
| C++ | 99,147.93 | 845 |
| CSS | 99,881.40 | 527 |
| Perl | 100,295.45 | 990 |
| C | 100,766.51 | 2120 |
| Go | 101,158.01 | 231 |
| Scala | 101,460.91 | 243 |
| ColdFusion | 101,536.70 | 109 |
| Objective-C | 101,801.60 | 562 |
| Groovy | 102,650.86 | 116 |
| Java | 103,179.39 | 1402 |
| XSLT | 106,199.19 | 123 |
| ActionScript | 108,119.47 | 113 |

References

1. Wainer, Howard. *The Most Dangerous Equation*. American Scientist, Volume 95.
2. Clark, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. page. [Web](http://www.sloansportsconference.com/wp-content/uploads/2013/Going%20for%20Three%20Predicting%20the%20Likelihood%20of%20Field%20Goal%20Success%20with%20Logistic%20Regression.pdf). 20 Feb. 2013.
3. http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
PTN Template

This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
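For context, here is a minimal sketch of how a *papermill.py driver might execute this template (the notebook paths and parameter values below are purely illustrative; a real run must supply every key listed in `required_parameters` further down):

import papermill as pm

# hypothetical driver: executes this template with injected parameters
pm.execute_notebook(
    "trial.ipynb",        # this template (illustrative path)
    "trial_out.ipynb",    # executed copy with outputs
    parameters={
        "experiment_name": "example_run",  # illustrative values only; the full
        "lr": 0.001,                       # required set is validated below
        "seed": 1337,
    },
)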
%load_ext autoreload
%autoreload 2
%matplotlib inline

import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt

from steves_models.steves_ptn import Steves_Prototypical_Network

from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment

from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory

from steves_utils.ptn_do_report import (
    get_loss_curve,
    get_results_table,
    get_parameters_table,
    get_domain_accuracies,
)

from steves_utils.transforms import get_chained_transform
Required Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "labels_source", "labels_target", "domains_source", "domains_target", "num_examples_per_domain_per_label_source", "num_examples_per_domain_per_label_target", "n_shot", "n_way", "n_query", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_transforms_source", "x_transforms_target", "episode_transforms_source", "episode_transforms_target", "pickle_name", "x_net", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "torch_default_dtype" } standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.0001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["num_examples_per_domain_per_label_source"]=100 standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 100 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "target_accuracy" standalone_parameters["x_transforms_source"] = ["unit_power"] standalone_parameters["x_transforms_target"] = ["unit_power"] standalone_parameters["episode_transforms_source"] = [] standalone_parameters["episode_transforms_target"] = [] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # uncomment for CORES dataset from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) standalone_parameters["labels_source"] = ALL_NODES standalone_parameters["labels_target"] = ALL_NODES standalone_parameters["domains_source"] = [1] standalone_parameters["domains_target"] = [2,3,4,5] standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl" # Uncomment these for ORACLE dataset # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS # standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS # standalone_parameters["domains_source"] = [8,20, 38,50] # standalone_parameters["domains_target"] = [14, 26, 32, 44, 56] # standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl" # 
standalone_parameters["num_examples_per_domain_per_label_source"]=1000 # standalone_parameters["num_examples_per_domain_per_label_target"]=1000 # Uncomment these for Metahan dataset # standalone_parameters["labels_source"] = list(range(19)) # standalone_parameters["labels_target"] = list(range(19)) # standalone_parameters["domains_source"] = [0] # standalone_parameters["domains_target"] = [1] # standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl" # standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # standalone_parameters["num_examples_per_domain_per_label_source"]=200 # standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # Parameters parameters = { "experiment_name": "baseline_ptn_wisig", "lr": 0.001, "device": "cuda", "seed": 1337, "dataset_seed": 1337, "labels_source": [ "1-10", "1-12", "1-14", "1-16", "1-18", "1-19", "1-8", "10-11", "10-17", "10-4", "10-7", "11-1", "11-10", "11-19", "11-20", "11-4", "11-7", "12-19", "12-20", "12-7", "13-14", "13-18", "13-19", "13-20", "13-3", "13-7", "14-10", "14-11", "14-12", "14-13", "14-14", "14-19", "14-20", "14-7", "14-8", "14-9", "15-1", "15-19", "15-6", "16-1", "16-16", "16-19", "16-20", "17-10", "17-11", "18-1", "18-10", "18-11", "18-12", "18-13", "18-14", "18-15", "18-16", "18-17", "18-19", "18-2", "18-20", "18-4", "18-5", "18-7", "18-8", "18-9", "19-1", "19-10", "19-11", "19-12", "19-13", "19-14", "19-15", "19-19", "19-2", "19-20", "19-3", "19-4", "19-6", "19-7", "19-8", "19-9", "2-1", "2-13", "2-15", "2-3", "2-4", "2-5", "2-6", "2-7", "2-8", "20-1", "20-12", "20-14", "20-15", "20-16", "20-18", "20-19", "20-20", "20-3", "20-4", "20-5", "20-7", "20-8", "3-1", "3-13", "3-18", "3-2", "3-8", "4-1", "4-10", "4-11", "5-1", "5-5", "6-1", "6-15", "6-6", "7-10", "7-11", "7-12", "7-13", "7-14", "7-7", "7-8", "7-9", "8-1", "8-13", "8-14", "8-18", "8-20", "8-3", "8-8", "9-1", "9-7", ], "labels_target": [ "1-10", "1-12", "1-14", "1-16", "1-18", "1-19", "1-8", "10-11", "10-17", "10-4", "10-7", "11-1", "11-10", "11-19", "11-20", "11-4", "11-7", "12-19", "12-20", "12-7", "13-14", "13-18", "13-19", "13-20", "13-3", "13-7", "14-10", "14-11", "14-12", "14-13", "14-14", "14-19", "14-20", "14-7", "14-8", "14-9", "15-1", "15-19", "15-6", "16-1", "16-16", "16-19", "16-20", "17-10", "17-11", "18-1", "18-10", "18-11", "18-12", "18-13", "18-14", "18-15", "18-16", "18-17", "18-19", "18-2", "18-20", "18-4", "18-5", "18-7", "18-8", "18-9", "19-1", "19-10", "19-11", "19-12", "19-13", "19-14", "19-15", "19-19", "19-2", "19-20", "19-3", "19-4", "19-6", "19-7", "19-8", "19-9", "2-1", "2-13", "2-15", "2-3", "2-4", "2-5", "2-6", "2-7", "2-8", "20-1", "20-12", "20-14", "20-15", "20-16", "20-18", "20-19", "20-20", "20-3", "20-4", "20-5", "20-7", "20-8", "3-1", "3-13", "3-18", "3-2", "3-8", "4-1", "4-10", "4-11", "5-1", "5-5", "6-1", "6-15", "6-6", "7-10", "7-11", "7-12", "7-13", "7-14", "7-7", "7-8", "7-9", "8-1", "8-13", "8-14", "8-18", "8-20", "8-3", "8-8", "9-1", "9-7", ], "x_transforms_source": [], "x_transforms_target": [], "episode_transforms_source": [], "episode_transforms_target": [], "num_examples_per_domain_per_label_source": 100, "num_examples_per_domain_per_label_target": 100, "n_shot": 3, "n_way": 130, "n_query": 2, "train_k_factor": 1, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float64", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_loss", "x_net": [ {"class": "nnReshape", "kargs": 
{"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "pickle_name": "wisig.node3-19.stratified_ds.2022A.pkl", "domains_source": [3], "domains_target": [1, 2, 4], } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG # (This is due to the randomized initial weights) ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() ################################### # Build the dataset ################################### if p.x_transforms_source == []: x_transform_source = None else: x_transform_source = get_chained_transform(p.x_transforms_source) if p.x_transforms_target == []: x_transform_target = None else: x_transform_target = get_chained_transform(p.x_transforms_target) if p.episode_transforms_source == []: episode_transform_source = None else: raise Exception("episode_transform_source not implemented") if p.episode_transforms_target == []: episode_transform_target = None else: raise Exception("episode_transform_target not implemented") eaf_source = Episodic_Accessor_Factory( labels=p.labels_source, domains=p.domains_source, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_source, example_transform_func=episode_transform_source, ) train_original_source, 
val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test() eaf_target = Episodic_Accessor_Factory( labels=p.labels_target, domains=p.domains_target, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_target, example_transform_func=episode_transform_target, ) train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test() transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) # Some quick unit tests on the data from steves_utils.transforms import get_average_power, get_average_magnitude q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source)) assert q_x.dtype == eval(p.torch_default_dtype) assert s_x.dtype == eval(p.torch_default_dtype) print("Visually inspect these to see if they line up with expected values given the transforms") print('x_transforms_source', p.x_transforms_source) print('x_transforms_target', p.x_transforms_target) print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy())) print("Average power, source:", get_average_power(q_x[0].numpy())) q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target)) print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy())) print("Average power, target:", get_average_power(q_x[0].numpy())) ################################### # Build the model ################################### model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = 
jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! # _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment)
_____no_output_____
MIT
experiments/baseline_ptn/wisig/trials/2/trial.ipynb
stevester94/csc500-notebooks
Neural Networks===============Neural networks can be constructed using the ``torch.nn`` package.Now that you had a glimpse of ``autograd``, ``nn`` depends on``autograd`` to define models and differentiate them.An ``nn.Module`` contains layers, and a method ``forward(input)`` thatreturns the ``output``.For example, look at this network that classifies digit images:.. figure:: /_static/img/mnist.png :alt: convnet convnetIt is a simple feed-forward network. It takes the input, feeds itthrough several layers one after the other, and then finally gives theoutput.A typical training procedure for a neural network is as follows:- Define the neural network that has some learnable parameters (or weights)- Iterate over a dataset of inputs- Process input through the network- Compute the loss (how far is the output from being correct)- Propagate gradients back into the network’s parameters- Update the weights of the network, typically using a simple update rule: ``weight = weight - learning_rate * gradient``Define the network------------------Let’s define this network:
import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 5x5 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 5 * 5, 120) # 5*5 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square, you can specify with a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() print(net)
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
You just have to define the ``forward`` function, and the ``backward``function (where gradients are computed) is automatically defined for youusing ``autograd``.You can use any of the Tensor operations in the ``forward`` function.The learnable parameters of a model are returned by ``net.parameters()``
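Before printing the parameters below, here is a tiny standalone sketch (separate from the LeNet example) of what "the backward function is automatically defined" means in practice: write the forward pass with ordinary Tensor operations and autograd derives the gradients for you.

import torch

x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()    # forward pass built from ordinary Tensor operations
y.backward()         # backward pass derived automatically by autograd
print(x.grad)        # tensor([2., 2., 2.])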
params = list(net.parameters()) print(len(params)) print(params[0].size()) # conv1's .weight
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
Let's try a random 32x32 input.Note: expected input size of this net (LeNet) is 32x32. To use this net onthe MNIST dataset, please resize the images from the dataset to 32x32.
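On the resize note above, a short sketch of how that could look with torchvision transforms (the download path here is just an example):

from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((32, 32)),   # LeNet expects 32x32 inputs
    transforms.ToTensor(),
])
mnist = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)  # example path
print(mnist[0][0].shape)           # torch.Size([1, 32, 32])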
input = torch.randn(1, 1, 32, 32) out = net(input) print(out)
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
Zero the gradient buffers of all parameters and backprops with randomgradients:
net.zero_grad() out.backward(torch.randn(1, 10))
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
Note``torch.nn`` only supports mini-batches. The entire ``torch.nn`` package only supports inputs that are a mini-batch of samples, and not a single sample. For example, ``nn.Conv2d`` will take in a 4D Tensor of ``nSamples x nChannels x Height x Width``. If you have a single sample, just use ``input.unsqueeze(0)`` to add a fake batch dimension.Before proceeding further, let's recap all the classes you’ve seen so far.**Recap:** - ``torch.Tensor`` - A *multi-dimensional array* with support for autograd operations like ``backward()``. Also *holds the gradient* w.r.t. the tensor. - ``nn.Module`` - Neural network module. *Convenient way of encapsulating parameters*, with helpers for moving them to GPU, exporting, loading, etc. - ``nn.Parameter`` - A kind of Tensor, that is *automatically registered as a parameter when assigned as an attribute to a* ``Module``. - ``autograd.Function`` - Implements *forward and backward definitions of an autograd operation*. Every ``Tensor`` operation creates at least a single ``Function`` node that connects to functions that created a ``Tensor`` and *encodes its history*.**At this point, we covered:** - Defining a neural network - Processing inputs and calling backward**Still Left:** - Computing the loss - Updating the weights of the networkLoss Function-------------A loss function takes the (output, target) pair of inputs, and computes avalue that estimates how far away the output is from the target.There are several different`loss functions `_ under thenn package .A simple loss is: ``nn.MSELoss`` which computes the mean-squared errorbetween the input and the target.For example:
output = net(input) target = torch.randn(10) # a dummy target, for example target = target.view(1, -1) # make it the same shape as output criterion = nn.MSELoss() loss = criterion(output, target) print(loss)
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
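A quick standalone sketch of the ``input.unsqueeze(0)`` trick mentioned in the mini-batch note above:

import torch

sample = torch.randn(1, 32, 32)   # a single image: nChannels x Height x Width
batch = sample.unsqueeze(0)       # add a fake batch dimension
print(batch.shape)                # torch.Size([1, 1, 32, 32]) -> nSamples x nChannels x Height x Width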
Now, if you follow ``loss`` in the backward direction, using its``.grad_fn`` attribute, you will see a graph of computations that lookslike this::: input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d -> flatten -> linear -> relu -> linear -> relu -> linear -> MSELoss -> lossSo, when we call ``loss.backward()``, the whole graph is differentiatedw.r.t. the neural net parameters, and all Tensors in the graph that have``requires_grad=True`` will have their ``.grad`` Tensor accumulated with thegradient.For illustration, let us follow a few steps backward:
print(loss.grad_fn) # MSELoss print(loss.grad_fn.next_functions[0][0]) # Linear print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
Backprop--------To backpropagate the error all we have to do is to ``loss.backward()``.You need to clear the existing gradients though, else gradients will beaccumulated to existing gradients.Now we shall call ``loss.backward()``, and have a look at conv1's biasgradients before and after the backward.
net.zero_grad() # zeroes the gradient buffers of all parameters print('conv1.bias.grad before backward') print(net.conv1.bias.grad) loss.backward() print('conv1.bias.grad after backward') print(net.conv1.bias.grad)
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
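To see the accumulation behaviour mentioned above in isolation, a small standalone sketch (separate from the network):

import torch

w = torch.ones(2, requires_grad=True)
(w * 3).sum().backward()
print(w.grad)      # tensor([3., 3.])
(w * 3).sum().backward()
print(w.grad)      # tensor([6., 6.]) -- gradients accumulated across the two calls
w.grad.zero_()     # clear them before the next backward pass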
Now, we have seen how to use loss functions.**Read Later:** The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is `here `_.**The only thing left to learn is:** - Updating the weights of the networkUpdate the weights------------------The simplest update rule used in practice is the Stochastic GradientDescent (SGD): ``weight = weight - learning_rate * gradient``We can implement this using simple Python code:.. code:: python learning_rate = 0.01 for f in net.parameters(): f.data.sub_(f.grad.data * learning_rate)However, as you use neural networks, you want to use various differentupdate rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.To enable this, we built a small package: ``torch.optim`` thatimplements all these methods. Using it is very simple:
import torch.optim as optim # create your optimizer optimizer = optim.SGD(net.parameters(), lr=0.01) # in your training loop: optimizer.zero_grad() # zero the gradient buffers output = net(input) loss = criterion(output, target) loss.backward() optimizer.step() # Does the update
_____no_output_____
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
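As a side note on the manual update rule shown earlier: the same ``weight = weight - learning_rate * gradient`` step can also be written without ``.data`` by wrapping it in ``torch.no_grad()``. A sketch, assuming ``net`` and its gradients from the cells above:

import torch

learning_rate = 0.01
with torch.no_grad():                  # keep the parameter update out of autograd
    for f in net.parameters():         # assumes net from the cells above
        f -= learning_rate * f.grad    # assumes loss.backward() has populated f.grad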
Classifying Fashion-MNISTNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.First off, let's load the dataset through torchvision.
import torch from torchvision import datasets, transforms import helper # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to /Users/Mia/.pytorch/F_MNIST_data/FashionMNIST/raw/train-images-idx3-ubyte.gz
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
xxiMiaxx/deep-learning-v2-pytorch
Here we can see one of the images.
image, label = next(iter(trainloader)) helper.imshow(image[0,:]);
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
xxiMiaxx/deep-learning-v2-pytorch
Building the networkHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
from torch import nn, optim import torch.nn.functional as F # TODO: Define your network architecture here class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): #flatten inputs x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim = 1) return x
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
xxiMiaxx/deep-learning-v2-pytorch
Train the networkNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).Then write the training code. Remember the training pass is a fairly straightforward process:* Make a forward pass through the network to get the logits * Use the logits to calculate the loss* Perform a backward pass through the network with `loss.backward()` to calculate the gradients* Take a step with the optimizer to update the weightsBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
# TODO: Create the network, define the criterion and optimizer model = Network() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) model # TODO: Train the network here epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in trainloader: log_ps = model(images) loss = criterion(log_ps, labels) ## zero the gradients so they do not accumulate between batches optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() else: print("Epoch: ", e) print(f"Training loss: {running_loss/len(trainloader)}") %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper # Test out your network! dataiter = iter(testloader) images, labels = next(dataiter) img = images[0] # Convert 2D image to 1D vector img = img.resize_(1, 784) # TODO: Calculate the class probabilities (softmax) for img ps = torch.exp(model(img)) # Plot the image and probabilities helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
_____no_output_____
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
xxiMiaxx/deep-learning-v2-pytorch
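If you also want an overall test accuracy to go with the single-image check above, a minimal sketch (assumes the ``model`` and ``testloader`` defined in this notebook):

import torch

correct, total = 0, 0
with torch.no_grad():                          # no gradients needed for evaluation
    for images, labels in testloader:          # assumes testloader from the cells above
        preds = model(images).argmax(dim=1)    # class with the highest log-probability
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"Test accuracy: {correct / total:.3f}")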
----------------- Please run the IPython Widget below. Using the checkboxes, you can:* Download the training, validation and test datasets* Extract all tarfiles* Create the necessary PyTorch files for the training/validation/test datasets. We create 1 file for each datanet sample, resulting in exactly * ./dataset/converted_train/: 120,000 .pt files (~29.9 GB) * ./dataset/converted_val/: 3,120 .pt files (~14.0 GB) * ./dataset/converted_test/: 1,560 .pt files (~6.7 GB) * You can select how many processes to use. Default is 4. More processes = faster runtime due to parallelism, but also multiplies the amount of RAM utilized.* Downloaded .gz files are not deleted, free these up manually if you need some space-------------------------------------------------------------
from convertDataset import process_in_parallel, download_dataset, extract_tarfiles import ipywidgets as widgets cbs = [widgets.Checkbox() for i in range(5)] cbs[0].description="Download dataset" cbs[1].description="Extract Tarfiles" cbs[2].description="Generate Pytorch Files - Training" cbs[3].description="Generate Pytorch Files - Validation" cbs[4].description="Generate Pytorch Files - Test" sl = widgets.IntSlider( value=4, min=0, max=16, step=1, style= {'description_width': 'initial'}, layout=widgets.Layout(width='100%',height='80px'), description='#processes to use (higher = more parallelism, uses up more RAM)', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='d' ) pb = widgets.Button( description='Run', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Run', ) def on_button_clicked(b): if cbs[0].value: print("Downloading dataset...") download_dataset() if cbs[1].value: print("Extracting Tarfiles...") extract_tarfiles() if cbs[2].value: print("Creating pytorch files (training)...") process_in_parallel('train',sl.value) if cbs[3].value: print("Creating pytorch files (validation)...") process_in_parallel('validation',sl.value) if cbs[4].value: print("Creating pytorch files (test)...") process_in_parallel('test',sl.value) pb.on_click(on_button_clicked) ui = widgets.VBox([widgets.HBox([x]) for x in cbs+[sl]] +[pb]) display(ui)
_____no_output_____
MIT
1) Download dataset, create .pt files.ipynb
brunoklaus/PS-001-ML5G-GNNetworkingChallenge2021-PARANA
Importing Dependencies
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pandas_datareader import pandas_datareader.data as web import datetime from sklearn.preprocessing import MinMaxScaler from keras.models import Sequential from keras.layers import Dense,LSTM,Dropout %matplotlib inline
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Importing Data
start = datetime.datetime(2016,1,1) end = datetime.datetime(2021,1,1) QQQ = web.DataReader("QQQ", "yahoo", start, end) QQQ.head() QQQ['Close'].plot(label = 'QQQ', figsize = (16,10), title = 'Closing Price') plt.legend(); QQQ['Volume'].plot(label = 'QQQ', figsize = (16,10), title = 'Volume Traded') plt.legend(); QQQ['MA50'] = QQQ['Close'].rolling(50).mean() QQQ['MA200'] = QQQ['Close'].rolling(200).mean() QQQ[['Close','MA50','MA200']].plot(figsize = (16,10), title = 'Moving Averages')
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Selecting The Close Column
QQQ["Close"]=pd.to_numeric(QQQ.Close,errors='coerce') #turning the Close column to numeric QQQ = QQQ.dropna() trainData = QQQ.iloc[:,3:4].values #selecting closing prices for training
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Scaling Values in the Range of 0-1 for Best Results
sc = MinMaxScaler(feature_range=(0,1)) trainData = sc.fit_transform(trainData) trainData.shape
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
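For reference, ``MinMaxScaler`` with ``feature_range=(0,1)`` maps each value to ``(x - min) / (max - min)``. A quick hand check on a separate scaler (so the fitted ``sc`` above is left untouched):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

vals = np.array([[10.0], [15.0], [20.0]])
sc_demo = MinMaxScaler(feature_range=(0, 1))           # demo scaler, not the notebook's sc
print(sc_demo.fit_transform(vals).ravel())             # [0.  0.5 1. ] i.e. (x - 10) / (20 - 10)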
Prepping Data for LSTM
X_train = [] y_train = [] for i in range (60,1060): X_train.append(trainData[i-60:i,0]) y_train.append(trainData[i,0]) #each input is the previous 60 scaled closing prices; the label is the next price X_train,y_train = np.array(X_train),np.array(y_train) X_train = np.reshape(X_train,(X_train.shape[0],X_train.shape[1],1)) #adding the feature axis so the shape is (samples, timesteps, features), as the LSTM expects X_train.shape
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Building The Model
model = Sequential() model.add(LSTM(units=100, return_sequences = True, input_shape =(X_train.shape[1],1))) model.add(Dropout(0.2)) model.add(LSTM(units=100, return_sequences = True)) model.add(Dropout(0.2)) model.add(LSTM(units=100, return_sequences = True)) model.add(Dropout(0.2)) model.add(LSTM(units=100, return_sequences = False)) model.add(Dropout(0.2)) model.add(Dense(units =1)) model.compile(optimizer='adam',loss="mean_squared_error") hist = model.fit(X_train, y_train, epochs = 20, batch_size = 32, verbose=2)
Epoch 1/20 32/32 - 26s - loss: 0.0187 Epoch 2/20 32/32 - 3s - loss: 0.0036 Epoch 3/20 32/32 - 3s - loss: 0.0026 Epoch 4/20 32/32 - 3s - loss: 0.0033 Epoch 5/20 32/32 - 3s - loss: 0.0033 Epoch 6/20 32/32 - 3s - loss: 0.0028 Epoch 7/20 32/32 - 3s - loss: 0.0024 Epoch 8/20 32/32 - 3s - loss: 0.0024 Epoch 9/20 32/32 - 3s - loss: 0.0030 Epoch 10/20 32/32 - 3s - loss: 0.0026 Epoch 11/20 32/32 - 3s - loss: 0.0020 Epoch 12/20 32/32 - 3s - loss: 0.0018 Epoch 13/20 32/32 - 3s - loss: 0.0024 Epoch 14/20 32/32 - 3s - loss: 0.0020 Epoch 15/20 32/32 - 3s - loss: 0.0029 Epoch 16/20 32/32 - 4s - loss: 0.0022 Epoch 17/20 32/32 - 3s - loss: 0.0016 Epoch 18/20 32/32 - 3s - loss: 0.0029 Epoch 19/20 32/32 - 3s - loss: 0.0021 Epoch 20/20 32/32 - 3s - loss: 0.0015
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Plotting The Training Loss
plt.plot(hist.history['loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train'], loc='upper left') plt.show()
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Testing Model on New Data
start = datetime.datetime(2021,1,1) end = datetime.datetime.today() testData = web.DataReader("QQQ", "yahoo", start, end) #importing new data for testing testData["Close"]=pd.to_numeric(testData.Close,errors='coerce') #turning the Close column to numeric testData = testData.dropna() #dropping the NA values testData = testData.iloc[:,3:4] #selecting the closing prices for testing y_test = testData.iloc[60:,0:].values #selecting the labels #input array for the model inputClosing = testData.iloc[:,0:].values inputClosing_scaled = sc.transform(inputClosing) inputClosing_scaled.shape X_test = [] length = len(testData) timestep = 60 for i in range(timestep,length): X_test.append(inputClosing_scaled[i-timestep:i,0]) X_test = np.array(X_test) X_test = np.reshape(X_test,(X_test.shape[0],X_test.shape[1],1)) X_test.shape y_pred = model.predict(X_test) #predicting values predicted_price = sc.inverse_transform(y_pred) #inverting the scaling transformation for plotting
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Plotting Results
plt.plot(y_test, color = 'blue', label = 'Actual Stock Price') plt.plot(predicted_price, color = 'red', label = 'Predicted Stock Price') plt.title('QQQ stock price prediction') plt.xlabel('Time') plt.ylabel('Stock Price') plt.legend() plt.show()
_____no_output_____
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
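If you also want a single error number to go with the plot, a short sketch (assumes ``y_test`` and ``predicted_price`` from the cells above):

import numpy as np

# assumes y_test and predicted_price from the cells above, both in price units
rmse = np.sqrt(np.mean((y_test - predicted_price) ** 2))
print("Test RMSE:", rmse)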
Boolean Operator
print (10>9) print (10==9) print (10<9) x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool() function print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(0)) print(bool([]))
True True True True False False False
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
Functions can return Boolean
def myfunctionboolean(): return True print(myfunctionboolean()) def myfunction(): return False if myfunction(): print("yes!") else: print("no")
no
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
You Try
print(10>9) a=6 b=7 print(a==b) print(a!=a)
True False False
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
Arithmetic Operators
print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #3x3=9+1 print(10**5)
15 5 50 2.0 0 2 3 1 100000
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
Bitwise Operators
a=60 #0011 1100 b=13 #0000 1101 print(a&b) #bitwise AND: 0000 1100 = 12 print(a|b) #bitwise OR: 0011 1101 = 61 print(a^b) #bitwise XOR: 0011 0001 = 49 print(~a) #bitwise NOT: -(a+1) = -61 print(a<<1) #shift left once: 0111 1000 = 120 print(a<<2) #shift left twice: 1111 0000 = 240 print(b>>1) #shift right once: 0000 0110 = 6 print(b>>2) #shift right twice: 0000 0011 = 3 (the low bits 01 are shifted out)
12 61 49 -61 120 240 6 3
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
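To see the bit patterns behind these results directly, ``bin()`` is handy (a small sketch):

a, b = 60, 13
print(bin(a), bin(b))    # 0b111100 0b1101
print(bin(a & b))        # 0b1100 -> 12
print(bin(a << 2))       # 0b11110000 -> 240
print(bin(b >> 1))       # 0b110 -> 6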
Python Assignment Operators
a+=3 #Same As a = a + 3 #Same As a = 60 + 3, a=63 print(a)
63
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
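The other augmented assignment operators follow the same pattern as ``+=`` (a quick sketch):

b = 10
b -= 2    # b = b - 2  -> 8
b *= 3    # b = b * 3  -> 24
b //= 5   # b = b // 5 -> 4
b **= 2   # b = b ** 2 -> 16
print(b)  # 16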
Logical Operators
#logical (and, or, not) and identity (is, is not) operators a = True b = False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b)
False True
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
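One more property of ``and`` and ``or`` worth knowing (a small sketch): they short-circuit, so the right-hand operand is only evaluated when needed, and they return one of the operands rather than always True/False.

def noisy(value):
    print("evaluated", value)   # only runs if the operand is actually evaluated
    return value

print(False and noisy(True))   # False -- noisy() is never called
print(True or noisy(False))    # True  -- noisy() is never called
print(0 or "fallback")         # fallback -- or returns the first truthy operand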
Hyperparameter tuning with Cloud AI Platform **Learning Objectives:** * Improve the accuracy of a model by hyperparameter tuning
import os PROJECT = 'qwiklabs-gcp-faf328caac1ef9a0' # REPLACE WITH YOUR PROJECT ID BUCKET = 'qwiklabs-gcp-faf328caac1ef9a0' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-east1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # for bash os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['TFVERSION'] = '1.8' # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION
Updated property [core/project]. Updated property [compute/region].
MIT
Coursera/Art and Science of Machine Learning/Improve model accuracy by hyperparameter tuning with AI Platform.ipynb
helpthx/Path_through_Data_Science_2019
Create command-line programIn order to submit to Cloud AI Platform, we need to create a distributed training program. Let's convert our housing example to fit that paradigm, using the Estimators API.
%%bash rm -rf house_prediction_module mkdir house_prediction_module mkdir house_prediction_module/trainer touch house_prediction_module/trainer/__init__.py %%writefile house_prediction_module/trainer/task.py import argparse import os import json import shutil from . import model if __name__ == '__main__' and "get_ipython" not in dir(): parser = argparse.ArgumentParser() parser.add_argument( '--learning_rate', type = float, default = 0.01 ) parser.add_argument( '--batch_size', type = int, default = 30 ) parser.add_argument( '--output_dir', help = 'GCS location to write checkpoints and export models.', required = True ) parser.add_argument( '--job-dir', help = 'this model ignores this field, but it is required by gcloud', default = 'junk' ) args = parser.parse_args() arguments = args.__dict__ # Unused args provided by service arguments.pop('job_dir', None) arguments.pop('job-dir', None) # Append trial_id to path if we are doing hptuning # This code can be removed if you are not using hyperparameter tuning arguments['output_dir'] = os.path.join( arguments['output_dir'], json.loads( os.environ.get('TF_CONFIG', '{}') ).get('task', {}).get('trial', '') ) # Run the training shutil.rmtree(arguments['output_dir'], ignore_errors=True) # start fresh each time # Pass the command line arguments to our model's train_and_evaluate function model.train_and_evaluate(arguments) %%writefile house_prediction_module/trainer/model.py import numpy as np import pandas as pd import tensorflow as tf tf.logging.set_verbosity(tf.logging.INFO) # Read dataset and split into train and eval df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep = ",") df['num_rooms'] = df['total_rooms'] / df['households'] np.random.seed(seed = 1) #makes split reproducible msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] # Train and eval input functions SCALE = 100000 def train_input_fn(df, batch_size): return tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]], y = traindf["median_house_value"] / SCALE, # note the scaling num_epochs = None, batch_size = batch_size, # note the batch size shuffle = True) def eval_input_fn(df, batch_size): return tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]], y = evaldf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = batch_size, shuffle = False) # Define feature columns features = [tf.feature_column.numeric_column('num_rooms')] def train_and_evaluate(args): # Compute appropriate number of steps num_steps = (len(traindf) / args['batch_size']) / args['learning_rate'] # if learning_rate=0.01, hundred epochs # Create custom optimizer myopt = tf.train.FtrlOptimizer(learning_rate = args['learning_rate']) # note the learning rate # Create rest of the estimator as usual estimator = tf.estimator.LinearRegressor(model_dir = args['output_dir'], feature_columns = features, optimizer = myopt) #Add rmse evaluation metric def rmse(labels, predictions): pred_values = tf.cast(predictions['predictions'], tf.float64) return {'rmse': tf.metrics.root_mean_squared_error(labels * SCALE, pred_values * SCALE)} estimator = tf.contrib.estimator.add_metrics(estimator, rmse) train_spec = tf.estimator.TrainSpec(input_fn = train_input_fn(df = traindf, batch_size = args['batch_size']), max_steps = num_steps) eval_spec = tf.estimator.EvalSpec(input_fn = eval_input_fn(df = evaldf, batch_size = len(evaldf)), steps = None) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) %%bash rm -rf house_trained 
export PYTHONPATH=${PYTHONPATH}:${PWD}/house_prediction_module gcloud ai-platform local train \ --module-name=trainer.task \ --job-dir=house_trained \ --package-path=$(pwd)/trainer \ -- \ --batch_size=30 \ --learning_rate=0.02 \ --output_dir=house_trained
WARNING: Logging before flag parsing goes to stderr. W0809 20:42:02.240282 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:6: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead. W0809 20:42:02.240634 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:6: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead. W0809 20:42:02.410248 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:41: The name tf.train.FtrlOptimizer is deprecated. Please use tf.compat.v1.train.FtrlOptimizer instead. I0809 20:42:02.410758 139715572925888 run_config.py:528] TF_CONFIG environment variable: {u'environment': u'cloud', u'cluster': {}, u'job': {u'args': [u'--batch_size=30', u'--learning_rate=0.02', u'--output_dir=house_trained', u'--job-dir', u'house_trained'], u'job_name': u'trainer.task'}, u'task': {}} I0809 20:42:02.411099 139715572925888 estimator.py:1790] Using default config. I0809 20:42:02.412035 139715572925888 estimator.py:209] Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f11d87aea50>, '_model_dir': 'house_trained/', '_protocol': None, '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_tf_random_seed': None, '_save_summary_steps': 100, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_experimental_max_worker_delay_secs': None, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''} W0809 20:42:03.567886 139715572925888 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons * https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue. 
I0809 20:42:03.568871 139715572925888 estimator.py:209] Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f11d06b4110>, '_model_dir': 'house_trained/', '_protocol': None, '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_tf_random_seed': None, '_save_summary_steps': 100, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_experimental_max_worker_delay_secs': None, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''} W0809 20:42:03.569215 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:20: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead. W0809 20:42:03.569324 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:20: The name tf.estimator.inputs.pandas_input_fn is deprecated. Please use tf.compat.v1.estimator.inputs.pandas_input_fn instead. I0809 20:42:03.577970 139715572925888 estimator_training.py:186] Not using Distribute Coordinator. I0809 20:42:03.578327 139715572925888 training.py:612] Running training and evaluation locally (non-distributed). I0809 20:42:03.578629 139715572925888 training.py:700] Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600. W0809 20:42:03.585417 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/training_util.py:236: initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. W0809 20:42:03.600763 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: __init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. W0809 20:42:03.601921 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. I0809 20:42:03.610526 139715572925888 estimator.py:1145] Calling model_fn. I0809 20:42:03.610791 139715572925888 estimator.py:1145] Calling model_fn. 
W0809 20:42:03.937570 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/canned/linear.py:308: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. I0809 20:42:04.041716 139715572925888 estimator.py:1147] Done calling model_fn. I0809 20:42:04.041951 139715572925888 estimator.py:1147] Done calling model_fn. I0809 20:42:04.042213 139715572925888 basic_session_run_hooks.py:541] Create CheckpointSaverHook. W0809 20:42:04.096678 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py:1354: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where I0809 20:42:04.321468 139715572925888 monitored_session.py:240] Graph was finalized. 2019-08-09 20:42:04.321920: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: AVX2 FMA To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags. 2019-08-09 20:42:04.331591: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz 2019-08-09 20:42:04.332354: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5575845a21f0 executing computations on platform Host. Devices: 2019-08-09 20:42:04.332417: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined> 2019-08-09 20:42:04.332908: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance. 2019-08-09 20:42:04.355002: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile. I0809 20:42:04.382742 139715572925888 session_manager.py:500] Running local_init_op. I0809 20:42:04.388341 139715572925888 session_manager.py:502] Done running local_init_op. W0809 20:42:04.411591 139715572925888 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py:875: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. I0809 20:42:04.641765 139715572925888 basic_session_run_hooks.py:606] Saving checkpoints for 0 into house_trained/model.ckpt. I0809 20:42:04.843869 139715572925888 basic_session_run_hooks.py:262] loss = 215.66043, step = 1 W0809 20:42:04.956371 139715572925888 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 18 vs previous value: 18. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. 
W0809 20:42:05.048196 139715572925888 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 57 vs previous value: 57. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. W0809 20:42:05.063069 139715572925888 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 63 vs previous value: 63. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. W0809 20:42:05.076215 139715572925888 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 68 vs previous value: 68. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. W0809 20:42:05.114201 139715572925888 basic_session_run_hooks.py:724] It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 83 vs previous value: 83. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. I0809 20:42:05.157850 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 317.933 I0809 20:42:05.159033 139715572925888 basic_session_run_hooks.py:260] loss = 54.358315, step = 101 (0.315 sec) I0809 20:42:05.406924 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 401.551 I0809 20:42:05.408078 139715572925888 basic_session_run_hooks.py:260] loss = 42.23906, step = 201 (0.249 sec) I0809 20:42:05.646493 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 417.343 I0809 20:42:05.647548 139715572925888 basic_session_run_hooks.py:260] loss = 43.14472, step = 301 (0.239 sec) I0809 20:42:05.883550 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 421.896 I0809 20:42:05.884644 139715572925888 basic_session_run_hooks.py:260] loss = 54.47378, step = 401 (0.237 sec) I0809 20:42:06.129745 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 406.174 I0809 20:42:06.130978 139715572925888 basic_session_run_hooks.py:260] loss = 14.438426, step = 501 (0.246 sec) I0809 20:42:06.358213 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 437.652 I0809 20:42:06.359368 139715572925888 basic_session_run_hooks.py:260] loss = 57.73707, step = 601 (0.228 sec) I0809 20:42:06.575443 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 460.34 I0809 20:42:06.576554 139715572925888 basic_session_run_hooks.py:260] loss = 22.231636, step = 701 (0.217 sec) I0809 20:42:06.798619 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 448.171 I0809 20:42:06.800406 139715572925888 basic_session_run_hooks.py:260] loss = 54.715797, step = 801 (0.224 sec) I0809 20:42:07.043385 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 408.513 I0809 20:42:07.044590 139715572925888 basic_session_run_hooks.py:260] loss = 28.722849, step = 901 (0.244 sec) I0809 20:42:07.289911 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 405.596 I0809 20:42:07.290906 139715572925888 basic_session_run_hooks.py:260] loss = 26.559034, step = 1001 (0.246 sec) I0809 20:42:07.529155 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 418.015 I0809 20:42:07.530375 139715572925888 basic_session_run_hooks.py:260] loss = 
1241.5792, step = 1101 (0.239 sec) I0809 20:42:07.781065 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 396.961 I0809 20:42:07.782162 139715572925888 basic_session_run_hooks.py:260] loss = 29.28805, step = 1201 (0.252 sec) I0809 20:42:08.019126 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 420.016 I0809 20:42:08.020097 139715572925888 basic_session_run_hooks.py:260] loss = 37.746925, step = 1301 (0.238 sec) I0809 20:42:08.244482 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 443.754 I0809 20:42:08.247842 139715572925888 basic_session_run_hooks.py:260] loss = 24.188057, step = 1401 (0.228 sec) I0809 20:42:08.498735 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 393.352 I0809 20:42:08.499910 139715572925888 basic_session_run_hooks.py:260] loss = 60.33488, step = 1501 (0.252 sec) I0809 20:42:08.721335 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 449.228 I0809 20:42:08.722378 139715572925888 basic_session_run_hooks.py:260] loss = 21.831383, step = 1601 (0.222 sec) I0809 20:42:08.941145 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 455.438 I0809 20:42:08.942961 139715572925888 basic_session_run_hooks.py:260] loss = 60.54083, step = 1701 (0.221 sec) I0809 20:42:09.173693 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 429.533 I0809 20:42:09.174632 139715572925888 basic_session_run_hooks.py:260] loss = 34.44056, step = 1801 (0.232 sec) I0809 20:42:09.421431 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 403.67 I0809 20:42:09.422734 139715572925888 basic_session_run_hooks.py:260] loss = 16.504276, step = 1901 (0.248 sec) I0809 20:42:09.635639 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 466.79 I0809 20:42:09.636759 139715572925888 basic_session_run_hooks.py:260] loss = 62.338196, step = 2001 (0.214 sec) I0809 20:42:09.856868 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 452.031 I0809 20:42:09.857850 139715572925888 basic_session_run_hooks.py:260] loss = 34.891525, step = 2101 (0.221 sec) I0809 20:42:10.070991 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 468.876 I0809 20:42:10.072278 139715572925888 basic_session_run_hooks.py:260] loss = 36.803764, step = 2201 (0.214 sec) I0809 20:42:10.292645 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 449.408 I0809 20:42:10.293653 139715572925888 basic_session_run_hooks.py:260] loss = 19.011322, step = 2301 (0.221 sec) I0809 20:42:10.504937 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 471.143 I0809 20:42:10.507251 139715572925888 basic_session_run_hooks.py:260] loss = 50.321453, step = 2401 (0.214 sec) I0809 20:42:10.739464 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 426.397 I0809 20:42:10.740521 139715572925888 basic_session_run_hooks.py:260] loss = 84.55872, step = 2501 (0.233 sec) I0809 20:42:10.979623 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 416.332 I0809 20:42:10.980529 139715572925888 basic_session_run_hooks.py:260] loss = 50.548977, step = 2601 (0.240 sec) I0809 20:42:11.199657 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 454.505 I0809 20:42:11.200752 139715572925888 basic_session_run_hooks.py:260] loss = 41.289875, step = 2701 (0.220 sec) I0809 20:42:11.416954 139715572925888 basic_session_run_hooks.py:692] global_step/sec: 460.337 I0809 20:42:11.418008 139715572925888 basic_session_run_hooks.py:260] loss = 15.092587, step = 2801 (0.217 sec) I0809 
[Per-step training log truncated: loss was logged every 100 steps from step 2901 through step 22601 at roughly 375-580 global steps/sec, with the reported loss fluctuating between about 9 and 130.]
I0809 20:42:51.116461 139715572925888 basic_session_run_hooks.py:606] Saving checkpoints for 22650 into house_trained/model.ckpt.
I0809 20:42:51.430068 139715572925888 evaluation.py:255] Starting evaluation at 2019-08-09T20:42:51Z
I0809 20:42:51.510735 139715572925888 monitored_session.py:240] Graph was finalized.
I0809 20:42:51.512527 139715572925888 saver.py:1280] Restoring parameters from house_trained/model.ckpt-22650 I0809 20:42:51.556935 139715572925888 session_manager.py:500] Running local_init_op. I0809 20:42:51.588593 139715572925888 session_manager.py:502] Done running local_init_op. I0809 20:42:51.886137 139715572925888 evaluation.py:275] Finished evaluation at 2019-08-09-20:42:51 I0809 20:42:51.886435 139715572925888 estimator.py:2039] Saving dict for global step 22650: average_loss = 1.2751312, global_step = 22650, label/mean = 2.0454624, loss = 4320.1445, prediction/mean = 2.022154, rmse = 112921.7 I0809 20:42:51.945333 139715572925888 estimator.py:2099] Saving 'checkpoint_path' summary for global step 22650: house_trained/model.ckpt-22650 I0809 20:42:51.982523 139715572925888 estimator.py:368] Loss for final step: 33.386158.
MIT
Coursera/Art and Science of Machine Learning/Improve model accuracy by hyperparameter tuning with AI Platform.ipynb
helpthx/Path_through_Data_Science_2019
Create hyperparam.yaml
%%writefile hyperparam.yaml trainingInput: hyperparameters: goal: MINIMIZE maxTrials: 5 maxParallelTrials: 1 hyperparameterMetricTag: rmse params: - parameterName: batch_size type: INTEGER minValue: 8 maxValue: 64 scaleType: UNIT_LINEAR_SCALE - parameterName: learning_rate type: DOUBLE minValue: 0.01 maxValue: 0.1 scaleType: UNIT_LOG_SCALE %%bash OUTDIR=gs://${BUCKET}/house_trained # CHANGE bucket name appropriately gsutil rm -rf $OUTDIR export PYTHONPATH=${PYTHONPATH}:${PWD}/house_prediction_module gcloud ai-platform jobs submit training house_$(date -u +%y%m%d_%H%M%S) \ --config=hyperparam.yaml \ --module-name=trainer.task \ --package-path=$(pwd)/house_prediction_module/trainer \ --job-dir=$OUTDIR \ --runtime-version=$TFVERSION \ --\ --output_dir=$OUTDIR \ !gcloud ai-platform jobs describe house_190809_204253 # CHANGE jobId appropriately
createTime: '2019-08-09T20:42:55Z' etag: zU1W9lhyf0w= jobId: house_190809_204253 startTime: '2019-08-09T20:42:59Z' state: RUNNING trainingInput: args: - --output_dir=gs://qwiklabs-gcp-faf328caac1ef9a0/house_trained hyperparameters: goal: MINIMIZE hyperparameterMetricTag: rmse maxParallelTrials: 1 maxTrials: 5 params: - maxValue: 64.0 minValue: 8.0 parameterName: batch_size scaleType: UNIT_LINEAR_SCALE type: INTEGER - maxValue: 0.1 minValue: 0.01 parameterName: learning_rate scaleType: UNIT_LOG_SCALE type: DOUBLE jobDir: gs://qwiklabs-gcp-faf328caac1ef9a0/house_trained packageUris: - gs://qwiklabs-gcp-faf328caac1ef9a0/house_trained/packages/2148c5e4ea8c7f8c90ee6fdaffa93a2f5fce6ef0bdb95b679c1067e97d0f01e7/trainer-0.0.0.tar.gz pythonModule: trainer.task region: us-east1 runtimeVersion: '1.8' trainingOutput: hyperparameterMetricTag: rmse isHyperparameterTuningJob: true View job in the Cloud Console at: https://console.cloud.google.com/mlengine/jobs/house_190809_204253?project=qwiklabs-gcp-faf328caac1ef9a0 View logs at: https://console.cloud.google.com/logs?resource=ml.googleapis.com%2Fjob_id%2Fhouse_190809_204253&project=qwiklabs-gcp-faf328caac1ef9a0 To take a quick anonymous survey, run: $ gcloud alpha survey
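The job is still RUNNING at this point; once it finishes, the same describe call should also report per-trial results under trainingOutput.trials (the hyperparameter values tried and the final objective value for each trial). A possible follow-up, shown only as a sketch using the jobId from above:

# Re-run once the tuning job completes; the --format projection narrows the output to the trial list.
!gcloud ai-platform jobs describe house_190809_204253 --format="yaml(trainingOutput.trials)"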
MIT
Coursera/Art and Science of Machine Learning/Improve model accuracy by hyperparameter tuning with AI Platform.ipynb
helpthx/Path_through_Data_Science_2019
Köhn

In this notebook I replicate Köhn (2015): _What's in an embedding? Analyzing word embeddings through multilingual evaluation_. The paper proposes to i) evaluate an embedding method on more than one language, and ii) evaluate an embedding model by how well its embeddings capture syntactic features. He uses an L2-regularized linear classifier, with a baseline that assigns the most frequent class. He finds that most methods perform similarly on this task, but that dependency-based embeddings perform better, particularly when the dimensionality is decreased. Overall, the aim is to have an evaluation method that tells you something about the structure of the learnt representations. He evaluates a range of different models on their ability to capture a number of morphosyntactic features across several languages.

**Embedding models tested:**
- cbow
- skip-gram
- glove
- dep
- cca
- brown

**Features tested:**
- pos
- headpos (the pos of the word's head)
- label
- gender
- case
- number
- tense

**Languages tested:**
- Basque
- English
- French
- German
- Hungarian
- Polish
- Swedish

Word embeddings were trained on automatically PoS-tagged and dependency-parsed data using existing models, so that the dependency-based embeddings can be trained. The evaluation is on hand-labelled data. English training data is a subset of Wikipedia; English test data comes from the PTB. For all other languages, both the training and test data come from a shared task on parsing morphologically rich languages. Köhn trained embeddings with window sizes 5 and 11 and dimensionalities 10, 100 and 200. Dependency-based embeddings perform best on almost all tasks; they even do well when the dimensionality is reduced to 10, while other methods perform poorly in that case.

I'll need:
- models
- learnt representations
- automatically labelled data
- hand-labelled data
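To make that evaluation recipe concrete, here is a minimal sketch (not Köhn's code) of comparing an L2-regularized linear classifier against the most-frequent-class baseline on one feature; X is a matrix of word embeddings and y the corresponding feature labels, both placeholders for the data built later in this notebook.

from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def embedding_vs_baseline(X, y):
    """Mean 5-fold accuracy of an L2 logistic regression vs. a most-frequent-class baseline."""
    baseline = DummyClassifier(strategy='most_frequent')
    classifier = LogisticRegression(penalty='l2', solver='liblinear')
    return {'baseline': cross_val_score(baseline, X, y, cv=5).mean(),
            'embedding': cross_val_score(classifier, X, y, cv=5).mean()}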
%matplotlib inline import os import csv import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() from sklearn.linear_model import LogisticRegression, LogisticRegressionCV from sklearn.model_selection import train_test_split, StratifiedKFold from sklearn.metrics import roc_curve, roc_auc_score, classification_report, confusion_matrix from sklearn.preprocessing import LabelEncoder data_path = '../../data' tmp_path = '../../tmp'
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Learnt representations

GloVe
size = 50 fname = 'embeddings/glove.6B.{}d.txt'.format(size) glove_path = os.path.join(data_path, fname) glove = pd.read_csv(glove_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE) glove.head()
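As a quick sanity check on the loaded vectors (an addition, not part of the original notebook), cosine similarity between a few rows of the glove dataframe should behave as expected; this only assumes the lower-cased 6B vocabulary contains these words.

# Cosine similarity between two word vectors taken from the glove dataframe.
def cosine(w1, w2):
    v1, v2 = glove.loc[w1].values, glove.loc[w2].values
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Related words should score noticeably higher than unrelated ones.
cosine('king', 'queen'), cosine('king', 'banana')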
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Features
fname = 'UD_English/features.csv' features_path = os.path.join(data_path, os.path.join('evaluation/dependency', fname)) features = pd.read_csv(features_path).set_index('form') features.head() df = pd.merge(glove, features, how='inner', left_index=True, right_index=True) df.head()
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Prediction
def prepare_X_and_y(feature, data): """Return X and y ready for predicting feature from embeddings.""" relevant_data = data[data[feature].notnull()] columns = list(range(1, size+1)) X = relevant_data[columns] y = relevant_data[feature] train = relevant_data['set'] == 'train' test = (relevant_data['set'] == 'test') | (relevant_data['set'] == 'dev') X_train, X_test = X[train].values, X[test].values y_train, y_test = y[train].values, y[test].values return X_train, X_test, y_train, y_test def predict(model, X_test): """Wrapper for getting predictions.""" results = model.predict_proba(X_test) return np.array([t for f,t in results]).reshape(-1,1) def conmat(model, X_test, y_test): """Wrapper for sklearn's confusion matrix.""" y_pred = model.predict(X_test) c = confusion_matrix(y_test, y_pred) sns.heatmap(c, annot=True, fmt='d', xticklabels=model.classes_, yticklabels=model.classes_, cmap="YlGnBu", cbar=False) plt.ylabel('Ground truth') plt.xlabel('Prediction') def draw_roc(model, X_test, y_test): """Convenience function to draw ROC curve.""" y_pred = predict(model, X_test) fpr, tpr, thresholds = roc_curve(y_test, y_pred) roc = roc_auc_score(y_test, y_pred) label = r'$AUC={}$'.format(str(round(roc, 3))) plt.plot(fpr, tpr, label=label); plt.title('ROC') plt.xlabel('False positive rate'); plt.ylabel('True positive rate'); plt.legend(); def cross_val_auc(model, X, y): for _ in range(5): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y) model = model.fit(X_train, y_train) draw_roc(model, X_test, y_test) X_train, X_test, y_train, y_test = prepare_X_and_y('Tense', df) model = LogisticRegression(penalty='l2', solver='liblinear') model = model.fit(X_train, y_train) conmat(model, X_test, y_test) sns.distplot(model.coef_[0], rug=True, kde=False);
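The cell above handles a single feature (Tense). A natural extension, sketched here as an addition rather than part of the original notebook, is to sweep over several of the morphosyntactic features Köhn tests; the column names other than Tense are assumptions about what features.csv provides and may need adjusting.

# Hypothetical sweep: test-set accuracy of the same L2 classifier per feature,
# assuming these columns exist in the merged dataframe df.
candidate_features = ['Tense', 'Number', 'Case', 'Gender']  # only 'Tense' is confirmed above
accuracies = {}
for feat in candidate_features:
    X_train, X_test, y_train, y_test = prepare_X_and_y(feat, df)
    clf = LogisticRegression(penalty='l2', solver='liblinear').fit(X_train, y_train)
    accuracies[feat] = clf.score(X_test, y_test)
accuracies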
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Transfer Learning Template
%load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper from steves_utils.iterable_aggregator import Iterable_Aggregator from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig from steves_utils.torch_sequential_builder import build_sequential from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path) from steves_utils.PTN.utils import independent_accuracy_assesment from torch.utils.data import DataLoader from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory from steves_utils.ptn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.transforms import get_chained_transform
_____no_output_____
MIT
experiments/tl_1v2/cores-oracle.run1.limited/trials/29/trial.ipynb
stevester94/csc500-notebooks
Allowed Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
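As an illustration of that injection step (a sketch, not part of this repo: the notebook filenames and the trimmed parameter dict are hypothetical), a driver script typically calls papermill like this:

import papermill as pm

# papermill copies the template, overwrites the cell tagged "parameters" with the
# dict below, and executes the copy top to bottom, saving outputs into the new file.
pm.execute_notebook(
    'trial_template.ipynb',        # hypothetical template notebook
    'trials/29/trial.ipynb',       # executed copy
    parameters={
        'experiment_name': 'tl_1v2:cores-oracle.run1.limited',
        'lr': 0.0001,
        'seed': 500,
        'dataset_seed': 500,
        # ...the rest of the required_parameters keys go here as well
    },
)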
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "n_shot", "n_query", "n_way", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_net", "datasets", "torch_default_dtype", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "x_shape", } from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) from steves_utils.ORACLE.utils_v2 import ( ALL_DISTANCES_FEET_NARROWED, ALL_RUNS, ALL_SERIAL_NUMBERS, ) standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["n_way"] = 8 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 50 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "source_loss" standalone_parameters["datasets"] = [ { "labels": ALL_SERIAL_NUMBERS, "domains": ALL_DISTANCES_FEET_NARROWED, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"), "source_or_target_dataset": "source", "x_transforms": ["unit_mag", "minus_two"], "episode_transforms": [], "domain_prefix": "ORACLE_" }, { "labels": ALL_NODES, "domains": ALL_DAYS, "num_examples_per_domain_per_label": 100, "pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), "source_or_target_dataset": "target", "x_transforms": ["unit_power", "times_zero"], "episode_transforms": [], "domain_prefix": "CORES_" } ] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # Parameters parameters = { "experiment_name": "tl_1v2:cores-oracle.run1.limited", "device": "cuda", "lr": 0.0001, "n_shot": 3, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": 
"BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "n_way": 16, "datasets": [ { "labels": [ "1-10.", "1-11.", "1-15.", "1-16.", "1-17.", "1-18.", "1-19.", "10-4.", "10-7.", "11-1.", "11-14.", "11-17.", "11-20.", "11-7.", "13-20.", "13-8.", "14-10.", "14-11.", "14-14.", "14-7.", "15-1.", "15-20.", "16-1.", "16-16.", "17-10.", "17-11.", "17-2.", "19-1.", "19-16.", "19-19.", "19-20.", "19-3.", "2-10.", "2-11.", "2-17.", "2-18.", "2-20.", "2-3.", "2-4.", "2-5.", "2-6.", "2-7.", "2-8.", "3-13.", "3-18.", "3-3.", "4-1.", "4-10.", "4-11.", "4-19.", "5-5.", "6-15.", "7-10.", "7-14.", "8-18.", "8-20.", "8-3.", "8-8.", ], "domains": [1, 2, 3, 4, 5], "num_examples_per_domain_per_label": -1, "pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl", "source_or_target_dataset": "target", "x_transforms": [], "episode_transforms": [], "domain_prefix": "CORES_", }, { "labels": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "domains": [32, 38, 8, 44, 14, 50, 20, 26], "num_examples_per_domain_per_label": 2000, "pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl", "source_or_target_dataset": "source", "x_transforms": [], "episode_transforms": [], "domain_prefix": "ORACLE.run1_", }, ], "dataset_seed": 500, "seed": 500, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) if "x_shape" not in p: p.x_shape = [2,256] # Default to this if we dont supply x_shape supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() p.domains_source = [] p.domains_target = [] train_original_source = [] val_original_source 
= [] test_original_source = [] train_original_target = [] val_original_target = [] test_original_target = [] # global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag # global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag def add_dataset( labels, domains, pickle_path, x_transforms, episode_transforms, domain_prefix, num_examples_per_domain_per_label, source_or_target_dataset:str, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), ): if x_transforms == []: x_transform = None else: x_transform = get_chained_transform(x_transforms) if episode_transforms == []: episode_transform = None else: raise Exception("episode_transforms not implemented") episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1]) eaf = Episodic_Accessor_Factory( labels=labels, domains=domains, num_examples_per_domain_per_label=num_examples_per_domain_per_label, iterator_seed=iterator_seed, dataset_seed=dataset_seed, n_shot=n_shot, n_way=n_way, n_query=n_query, train_val_test_k_factors=train_val_test_k_factors, pickle_path=pickle_path, x_transform_func=x_transform, ) train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test() train = Lazy_Iterable_Wrapper(train, episode_transform) val = Lazy_Iterable_Wrapper(val, episode_transform) test = Lazy_Iterable_Wrapper(test, episode_transform) if source_or_target_dataset=="source": train_original_source.append(train) val_original_source.append(val) test_original_source.append(test) p.domains_source.extend( [domain_prefix + str(u) for u in domains] ) elif source_or_target_dataset=="target": train_original_target.append(train) val_original_target.append(val) test_original_target.append(test) p.domains_target.extend( [domain_prefix + str(u) for u in domains] ) else: raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}") for ds in p.datasets: add_dataset(**ds) # from steves_utils.CORES.utils import ( # ALL_NODES, # ALL_NODES_MINIMUM_1000_EXAMPLES, # ALL_DAYS # ) # add_dataset( # labels=ALL_NODES, # domains = ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"cores_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle1_{u}" # ) # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # add_dataset( # labels=ALL_SERIAL_NUMBERS, # domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}), # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"), # source_or_target_dataset="source", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"oracle2_{u}" # ) # add_dataset( # labels=list(range(19)), # domains = [0,1,2], # 
num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"met_{u}" # ) # # from steves_utils.wisig.utils import ( # # ALL_NODES_MINIMUM_100_EXAMPLES, # # ALL_NODES_MINIMUM_500_EXAMPLES, # # ALL_NODES_MINIMUM_1000_EXAMPLES, # # ALL_DAYS # # ) # import steves_utils.wisig.utils as wisig # add_dataset( # labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES, # domains = wisig.ALL_DAYS, # num_examples_per_domain_per_label=100, # pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"), # source_or_target_dataset="target", # x_transform_func=global_x_transform_func, # domain_modifier=lambda u: f"wisig_{u}" # ) ################################### # Build the dataset ################################### train_original_source = Iterable_Aggregator(train_original_source, p.seed) val_original_source = Iterable_Aggregator(val_original_source, p.seed) test_original_source = Iterable_Aggregator(test_original_source, p.seed) train_original_target = Iterable_Aggregator(train_original_target, p.seed) val_original_target = Iterable_Aggregator(val_original_target, p.seed) test_original_target = Iterable_Aggregator(test_original_target, p.seed) # For CNN We only use X and Y. And we only train on the source. # Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) from steves_utils.transforms import get_average_magnitude, get_average_power print(set([u for u,_ in val_original_source])) print(set([u for u,_ in val_original_target])) s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source)) print(s_x) # for ds in [ # train_processed_source, # val_processed_source, # test_processed_source, # train_processed_target, # val_processed_target, # test_processed_target # ]: # for s_x, s_y, q_x, q_y, _ in ds: # for X in (s_x, q_x): # for x in X: # assert np.isclose(get_average_magnitude(x.numpy()), 1.0) # assert np.isclose(get_average_power(x.numpy()), 1.0) ################################### # Build the model ################################### # easfsl only wants a tuple for the shape model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train 
################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! # _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment)
_____no_output_____
MIT
experiments/tl_1v2/cores-oracle.run1.limited/trials/29/trial.ipynb
stevester94/csc500-notebooks
Convert old input card 1. meta and experiment
from ruamel.yaml import YAML from cvm.utils import get_inp import sys yaml = YAML() yaml.indent(mapping=4, sequence=4, offset=2) yaml.default_flow_style = None yaml.width = 120 inp = get_inp('<old_input_card.json>') meta = dict(host=inp['host'], impurity=inp['impurity'], prefix=inp['prefix'], description=inp['description'], structure=inp['structure']) experiment = dict(temperature=inp['experiment'][0]['temp'], concentration=inp['experiment'][0]['c']) tmp = {'meta': meta, 'experiment': experiment} tmp with open('input.yml', 'w') as f: yaml.dump(tmp, f)
_____no_output_____
BSD-3-Clause
samples/convert_old_input_card.ipynb
kidddddd1984/CVM
2. energies
def extractor(s, prefix): print(s['label']) print(s['transfer']) print(s['temp']) data = s['datas'] lattice = data['lattice_c'] host=data['host_en'] n_ens = {} for i in range(11): s_i = str(i + 1) l = 'pair' + s_i n_ens[s_i + '_II'] = data[l][0]['energy'] n_ens[s_i + '_IH'] = data[l][1]['energy'] n_ens[s_i + '_HH'] = data[l][2]['energy'] normalizer = dict(lattice=lattice, **n_ens) clusters = dict( lattice=lattice, host=host, Rh4=data['tetra'][0]['energy'], Rh3Pd1=data['tetra'][1]['energy'], Rh2Pd2=data['tetra'][2]['energy'], Rh1Pd3=data['tetra'][3]['energy'], Pd4=data['tetra'][4]['energy'], ) n_name = prefix + '_normalizer.csv' c_name = prefix + '_clusters.csv' print(n_name) print(c_name) print() pd.DataFrame(normalizer).to_csv(n_name, index=False) pd.DataFrame(clusters).to_csv(c_name, index=False) for i, s in enumerate(inp['series']): extractor(s, str(i))
$T_\mathrm{FD}=800$K [[1, 11, 2]] [400, 1290, 50] 0_normalizer.csv 0_clusters.csv $T_\mathrm{FD}=1000$K [[1, 11, 2]] [400, 1550, 50] 1_normalizer.csv 1_clusters.csv $T_\mathrm{FD}=1200$K [[1, 11, 2]] [400, 1700, 50] 2_normalizer.csv 2_clusters.csv $T_\mathrm{FD}=1400$K [[1, 11, 2]] [500, 1700, 50] 3_normalizer.csv 3_clusters.csv $T_\mathrm{FD}=1600$K [[1, 11, 2]] [500, 1870, 50] 4_normalizer.csv 4_clusters.csv
BSD-3-Clause
samples/convert_old_input_card.ipynb
kidddddd1984/CVM
import torch from torchvision.transforms import ToTensor, Normalize, Compose from torchvision.datasets import MNIST import torch.nn as nn from torch.utils.data import DataLoader from torchvision.utils import save_image import os class DeviceDataLoader: def __init__(self, dl, device): self.dl = dl self.device = device def __iter__(self): for b in self.dl: yield self.to_device(b, self.device) def __len__(self): return len(self.dl) def to_device(self, data, device): if isinstance(data, (list, tuple)): return [self.to_device(x, device) for x in data] return data.to(device, non_blocking=True) class MNIST_GANS: def __init__(self, dataset, image_size, device, num_epochs=50, loss_function=nn.BCELoss(), batch_size=100, hidden_size=2561, latent_size=64): self.device = device bare_data_loader = DataLoader(dataset, batch_size, shuffle=True) self.data_loader = DeviceDataLoader(bare_data_loader, device) self.loss_function = loss_function self.hidden_size = hidden_size self.latent_size = latent_size self.batch_size = batch_size self.D = nn.Sequential( nn.Linear(image_size, hidden_size), nn.LeakyReLU(0.2), nn.Linear(hidden_size, hidden_size), nn.LeakyReLU(0.2), nn.Linear(hidden_size, 1), nn.Sigmoid()) self.G = nn.Sequential( nn.Linear(latent_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, image_size), nn.Tanh()) self.d_optimizer = torch.optim.Adam(self.D.parameters(), lr=0.0002) self.g_optimizer = torch.optim.Adam(self.G.parameters(), lr=0.0002) self.sample_dir = './../data/mnist_samples' if not os.path.exists(self.sample_dir): os.makedirs(self.sample_dir) self.G.to(device) self.D.to(device) self.sample_vectors = torch.randn(self.batch_size, self.latent_size).to(self.device) self.num_epochs = num_epochs @staticmethod def denormalize(x): out = (x + 1) / 2 return out.clamp(0, 1) def reset_grad(self): self.d_optimizer.zero_grad() self.g_optimizer.zero_grad() def train_discriminator(self, images): real_labels = torch.ones(self.batch_size, 1).to(self.device) fake_labels = torch.zeros(self.batch_size, 1).to(self.device) outputs = self.D(images) d_loss_real = self.loss_function(outputs, real_labels) real_score = outputs new_sample_vectors = torch.randn(self.batch_size, self.latent_size).to(self.device) fake_images = self.G(new_sample_vectors) outputs = self.D(fake_images) d_loss_fake = self.loss_function(outputs, fake_labels) fake_score = outputs d_loss = d_loss_real + d_loss_fake self.reset_grad() d_loss.backward() self.d_optimizer.step() return d_loss, real_score, fake_score def train_generator(self): new_sample_vectors = torch.randn(self.batch_size, self.latent_size).to(self.device) fake_images = self.G(new_sample_vectors) labels = torch.ones(self.batch_size, 1).to(self.device) g_loss = self.loss_function(self.D(fake_images), labels) self.reset_grad() g_loss.backward() self.g_optimizer.step() return g_loss, fake_images def save_fake_images(self, index): fake_images = self.G(self.sample_vectors) fake_images = fake_images.reshape(fake_images.size(0), 1, 28, 28) fake_fname = 'fake_images-{0:0=4d}.png'.format(index) print('Saving', fake_fname) save_image(self.denormalize(fake_images), os.path.join(self.sample_dir, fake_fname), nrow=10) def run(self): total_step = len(self.data_loader) d_losses, g_losses, real_scores, fake_scores = [], [], [], [] for epoch in range(self.num_epochs): for i, (images, _) in enumerate(self.data_loader): images = images.reshape(self.batch_size, -1) d_loss, real_score, fake_score = self.train_discriminator(images) g_loss, fake_images 
= self.train_generator() if (i + 1) % 600 == 0: d_losses.append(d_loss.item()) g_losses.append(g_loss.item()) real_scores.append(real_score.mean().item()) fake_scores.append(fake_score.mean().item()) print(f'''Epoch [{epoch}/{self.num_epochs}], Step [{i + 1}/{ total_step}], d_loss: {d_loss.item():.4f}, g_loss: {g_loss.item():.4f}, D(x): { real_score.mean().item():.2f}, D(G(z)): {fake_score.mean().item():.2f}''') self.save_fake_images(epoch + 1) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') mnist = MNIST(root='./../data', train=True, download=True, transform=Compose([ToTensor(), Normalize(mean=(0.5,), std=(0.5,))])) image_size = mnist.data[0].flatten().size()[0] gans = MNIST_GANS(dataset=mnist, image_size=image_size, device=device) gans.run()
_____no_output_____
MIT
simple_generative_adversarial_net/MNIST_GANs.ipynb
s-mostafa-a/a
Matrix > Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil A matrix is a square or rectangular array of numbers or symbols (termed elements), arranged in rows and columns. For instance:$$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\a_{2,1} & a_{2,2} & a_{2,3} \end{bmatrix}$$$$ \mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\4 & 5 & 6 \end{bmatrix}$$The matrix $\mathbf{A}$ above has two rows and three columns; it is a 2x3 matrix. In Numpy:
# Import the necessary libraries import numpy as np from IPython.display import display np.set_printoptions(precision=4) # number of digits of precision for floating point A = np.array([[1, 2, 3], [4, 5, 6]]) A
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
To get information about the number of elements and the structure of the matrix (in fact, a Numpy array), we can use:
print('A:\n', A) print('len(A) = ', len(A)) print('np.size(A) = ', np.size(A)) print('np.shape(A) = ', np.shape(A)) print('np.ndim(A) = ', np.ndim(A))
A: [[1 2 3] [4 5 6]] len(A) = 2 np.size(A) = 6 np.shape(A) = (2, 3) np.ndim(A) = 2
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
We could also have accessed this information with the corresponding methods:
print('A.size = ', A.size) print('A.shape = ', A.shape) print('A.ndim = ', A.ndim)
A.size = 6 A.shape = (2, 3) A.ndim = 2
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
We used the array function in Numpy to represent a matrix. A [Numpy array is in fact different from a matrix](http://www.scipy.org/NumPy_for_Matlab_Users); if we want to use explicit matrices in Numpy, we have to use the function `mat`:
B = np.mat([[1, 2, 3], [4, 5, 6]]) B
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Both array and matrix types work in Numpy, but you should choose only one type and not mix them; the array is preferred because it is [the standard vector/matrix/tensor type of Numpy](http://www.scipy.org/NumPy_for_Matlab_Users). So, let's use the array type for the rest of this text. Addition and multiplicationThe sum of two m-by-n matrices $\mathbf{A}$ and $\mathbf{B}$ is another m-by-n matrix: $$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\a_{2,1} & a_{2,2} & a_{2,3} \end{bmatrix}\;\;\; \text{and} \;\;\;\mathbf{B} =\begin{bmatrix} b_{1,1} & b_{1,2} & b_{1,3} \\b_{2,1} & b_{2,2} & b_{2,3} \end{bmatrix}$$$$\mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{1,1}+b_{1,1} & a_{1,2}+b_{1,2} & a_{1,3}+b_{1,3} \\a_{2,1}+b_{2,1} & a_{2,2}+b_{2,2} & a_{2,3}+b_{2,3} \end{bmatrix}$$In Numpy:
A = np.array([[1, 2, 3], [4, 5, 6]]) B = np.array([[7, 8, 9], [10, 11, 12]]) print('A:\n', A) print('B:\n', B) print('A + B:\n', A+B);
A: [[1 2 3] [4 5 6]] B: [[ 7 8 9] [10 11 12]] A + B: [[ 8 10 12] [14 16 18]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
The multiplication of the m-by-n matrix $\mathbf{A}$ by the n-by-p matrix $\mathbf{B}$ is a m-by-p matrix:$$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} \\a_{2,1} & a_{2,2} \end{bmatrix}\;\;\; \text{and} \;\;\;\mathbf{B} =\begin{bmatrix} b_{1,1} & b_{1,2} & b_{1,3} \\b_{2,1} & b_{2,2} & b_{2,3} \end{bmatrix}$$$$\mathbf{A} \mathbf{B} = \begin{bmatrix} a_{1,1}b_{1,1} + a_{1,2}b_{2,1} & a_{1,1}b_{1,2} + a_{1,2}b_{2,2} & a_{1,1}b_{1,3} + a_{1,2}b_{2,3} \\a_{2,1}b_{1,1} + a_{2,2}b_{2,1} & a_{2,1}b_{1,2} + a_{2,2}b_{2,2} & a_{2,1}b_{1,3} + a_{2,2}b_{2,3}\end{bmatrix}$$In Numpy:
A = np.array([[1, 2], [3, 4]]) B = np.array([[5, 6, 7], [8, 9, 10]]) print('A:\n', A) print('B:\n', B) print('A x B:\n', np.dot(A, B));
A: [[1 2] [3 4]] B: [[ 5 6 7] [ 8 9 10]] A x B: [[21 24 27] [47 54 61]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Note that because the array type is not truly a matrix type, we used the dot product to calculate matrix multiplication. We can use the matrix type to show the equivalent:
A = np.mat(A) B = np.mat(B) print('A:\n', A) print('B:\n', B) print('A x B:\n', A*B);
A: [[1 2] [3 4]] B: [[ 5 6 7] [ 8 9 10]] A x B: [[21 24 27] [47 54 61]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Same result as before. The order in multiplication matters, $\mathbf{AB} \neq \mathbf{BA}$:
A = np.array([[1, 2], [3, 4]]) B = np.array([[5, 6], [7, 8]]) print('A:\n', A) print('B:\n', B) print('A x B:\n', np.dot(A, B)) print('B x A:\n', np.dot(B, A));
A: [[1 2] [3 4]] B: [[5 6] [7 8]] A x B: [[19 22] [43 50]] B x A: [[23 34] [31 46]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
The addition or multiplication of a scalar (a single number) to a matrix is performed over all the elements of the matrix:
A = np.array([[1, 2], [3, 4]]) c = 10 print('A:\n', A) print('c:\n', c) print('c + A:\n', c+A) print('cA:\n', c*A);
A: [[1 2] [3 4]] c: 10 c + A: [[11 12] [13 14]] cA: [[10 20] [30 40]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
TranspositionThe transpose of the matrix $\mathbf{A}$ is the matrix $\mathbf{A^T}$ turning all the rows of matrix $\mathbf{A}$ into columns (or columns into rows):$$ \mathbf{A} = \begin{bmatrix} a & b & c \\d & e & f \end{bmatrix}\;\;\;\;\;\;\iff\;\;\;\;\;\;\mathbf{A^T} = \begin{bmatrix} a & d \\b & e \\c & f\end{bmatrix} $$In NumPy, the transpose operator can be used as a method or function:
A = np.array([[1, 2], [3, 4]]) print('A:\n', A) print('A.T:\n', A.T) print('np.transpose(A):\n', np.transpose(A));
A: [[1 2] [3 4]] A.T: [[1 3] [2 4]] np.transpose(A): [[1 3] [2 4]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Determinant The determinant is a number associated with a square matrix. The determinant of the following matrix: $$ \left[ \begin{array}{ccc}a & b & c \\d & e & f \\g & h & i \end{array} \right] $$is written as:$$ \left| \begin{array}{ccc}a & b & c \\d & e & f \\g & h & i \end{array} \right| $$And has the value:$$ (aei + bfg + cdh) - (ceg + bdi + afh) $$One way to manually calculate the determinant of a matrix is to use the [rule of Sarrus](http://en.wikipedia.org/wiki/Rule_of_Sarrus): we repeat the first columns (all columns but the last one) on the right side of the matrix and calculate the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, as illustrated in the following figure: Figure. Rule of Sarrus: the sum of the products of the solid diagonals minus the sum of the products of the dashed diagonals (image from Wikipedia). In Numpy, the determinant is computed with the `linalg.det` function:
A = np.array([[1, 2], [3, 4]]) print('A:\n', A); print('Determinant of A:\n', np.linalg.det(A))
Determinant of A: -2.0
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
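To make the rule of Sarrus concrete, here is a small check (added for illustration, not part of the original notebook) comparing the formula above with `np.linalg.det` on an arbitrary 3x3 matrix:

```python
import numpy as np

# an arbitrary 3x3 matrix, chosen only for illustration
M = np.array([[1, 2, 3],
              [4, 0, 6],
              [7, 8, 9]])
a, b, c = M[0]
d, e, f = M[1]
g, h, i = M[2]

# rule of Sarrus: products of the descending diagonals minus
# products of the ascending diagonals
det_sarrus = (a*e*i + b*f*g + c*d*h) - (c*e*g + b*d*i + a*f*h)
print(det_sarrus, np.linalg.det(M))  # both should give 60 (up to rounding)
```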
IdentityThe identity matrix $\mathbf{I}$ is a matrix with ones in the main diagonal and zeros otherwise. The 3x3 identity matrix is: $$ \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1 \end{bmatrix} $$In Numpy, instead of manually creating this matrix we can use the function `eye`:
np.eye(3) # identity 3x3 array
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
InverseThe inverse of the matrix $\mathbf{A}$ is the matrix $\mathbf{A^{-1}}$ such that the product between these two matrices is the identity matrix:$$ \mathbf{A}\cdot\mathbf{A^{-1}} = \mathbf{I} $$The calculation of the inverse of a matrix is usually not simple (the inverse of the matrix $\mathbf{A}$ is not $1/\mathbf{A}$; there is no division operation between matrices). The Numpy function `linalg.inv` computes the inverse of a square matrix: numpy.linalg.inv(a) Compute the (multiplicative) inverse of a matrix. Given a square matrix a, return the matrix ainv satisfying dot(a, ainv) = dot(ainv, a) = eye(a.shape[0]).
A = np.array([[1, 2], [3, 4]]) print('A:\n', A) Ainv = np.linalg.inv(A) print('Inverse of A:\n', Ainv);
A: [[1 2] [3 4]] Inverse of A: [[-2. 1. ] [ 1.5 -0.5]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
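As a quick sanity check (an addition to the original text), we can confirm that the computed inverse indeed satisfies $\mathbf{A}\mathbf{A^{-1}} = \mathbf{I}$ up to floating-point precision:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
Ainv = np.linalg.inv(A)

# the product with the inverse should recover the identity matrix
print(A @ Ainv)
print(np.allclose(A @ Ainv, np.eye(2)))  # True
```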
Pseudo-inverse For a non-square matrix, its inverse is not defined. However, we can calculate what is known as the pseudo-inverse. Consider a non-square matrix, $\mathbf{A}$. To calculate its pseudo-inverse, note that the following manipulation results in the identity matrix:$$ \mathbf{A} \mathbf{A}^T (\mathbf{A}\mathbf{A}^T)^{-1} = \mathbf{I} $$The matrix $\mathbf{A} \mathbf{A}^T$ is square and is invertible (also [nonsingular](https://en.wikipedia.org/wiki/Invertible_matrix)) if the rows of $\mathbf{A}$ are [linearly independent](https://en.wikipedia.org/wiki/Linear_independence). The matrix $\mathbf{A}^T(\mathbf{A}\mathbf{A}^T)^{-1}$ is known as the [generalized inverse or Moore–Penrose pseudoinverse](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse) of the matrix $\mathbf{A}$, a generalization of the inverse matrix. To compute the Moore–Penrose pseudoinverse, we could calculate it by a naive approach in Python:
```python
from numpy.linalg import inv
Ainv = A.T @ inv(A @ A.T)
```
But both Numpy and Scipy have functions to calculate the pseudoinverse, which might give greater numerical stability (but read [Inverses and pseudoinverses. Numerical issues, speed, symmetry](http://vene.ro/blog/inverses-pseudoinverses-numerical-issues-speed-symmetry.html)). Of note, [numpy.linalg.pinv](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html) calculates the pseudoinverse of a matrix using its singular-value decomposition (SVD) and including all large singular values (using the [LAPACK (Linear Algebra Package)](https://en.wikipedia.org/wiki/LAPACK) routine gesdd), whereas [scipy.linalg.pinv](http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinv.html#scipy.linalg.pinv) calculates a pseudoinverse of a matrix using a least-squares solver (using the LAPACK method gelsd) and [scipy.linalg.pinv2](http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinv2.html) also uses SVD to find the pseudoinverse (also using the LAPACK routine gesdd). For example:
from scipy.linalg import pinv2 A = np.array([[1, 0, 0], [0, 1, 0]]) Apinv = pinv2(A) print('Matrix A:\n', A) print('Pseudo-inverse of A:\n', Apinv) print('A x Apinv:\n', A@Apinv)
Matrix A: [[1 0 0] [0 1 0]] Pseudo-inverse of A: [[ 1. 0.] [ 0. 1.] [ 0. 0.]] A x Apinv: [[ 1. 0.] [ 0. 1.]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
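For the matrix used above, the naive formula $\mathbf{A}^T(\mathbf{A}\mathbf{A}^T)^{-1}$ and the library pseudoinverse agree; a short check (added here as an illustration, using `numpy.linalg.pinv` rather than the scipy functions of the original cell):

```python
import numpy as np
from numpy.linalg import inv, pinv

A = np.array([[1, 0, 0], [0, 1, 0]])  # same A as in the cell above (full row rank)

Apinv_naive = A.T @ inv(A @ A.T)  # naive right pseudo-inverse
Apinv_svd = pinv(A)               # SVD-based pseudoinverse

print(np.allclose(Apinv_naive, Apinv_svd))  # True
```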
Orthogonality A square matrix is said to be orthogonal if: 1. There is no linear combination of its rows or columns that would lead to another row or column. 2. Its columns or rows form a basis of (independent) unit vectors (versors). As a consequence: 1. Its determinant is equal to 1 or -1. 2. Its inverse is equal to its transpose. However, keep in mind that not all matrices with determinant equal to one are orthogonal; for example, the matrix:$$ \begin{bmatrix}3 & 2 \\4 & 3 \end{bmatrix} $$has determinant equal to one but is not orthogonal (its columns and rows do not have norm equal to one). Linear equations > A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and (the first power of) a single variable ([Wikipedia](http://en.wikipedia.org/wiki/Linear_equation)). We are interested in solving a set of linear equations where two or more variables are unknown, for instance:$$ x + 2y = 4 $$$$ 3x + 4y = 10 $$Let's see how to employ the matrix formalism to solve these equations (even though we know the solution is `x=2` and `y=1`). Let's express this set of equations in matrix form:$$ \begin{bmatrix} 1 & 2 \\3 & 4 \end{bmatrix}\begin{bmatrix} x \\y \end{bmatrix}= \begin{bmatrix} 4 \\10 \end{bmatrix}$$And for the general case:$$ \mathbf{Av} = \mathbf{c} $$Where $\mathbf{A, v, c}$ are the matrices above and we want to find the values `x,y` for the matrix $\mathbf{v}$. Because there is no division of matrices, we can use the inverse of $\mathbf{A}$ to solve for $\mathbf{v}$:$$ \mathbf{A}^{-1}\mathbf{Av} = \mathbf{A}^{-1}\mathbf{c} \implies $$$$ \mathbf{v} = \mathbf{A}^{-1}\mathbf{c} $$As we know how to compute the inverse of $\mathbf{A}$, the solution is:
A = np.array([[1, 2], [3, 4]]) Ainv = np.linalg.inv(A) c = np.array([4, 10]) v = np.dot(Ainv, c) print('v:\n', v)
v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
What we expected. However, the use of the inverse of a matrix to solve equations is computationally inefficient. Instead, we should use `linalg.solve` for a determined system (same number of equations and unknowns) or `linalg.lstsq` otherwise: From the help for `solve`: numpy.linalg.solve(a, b)[source] Solve a linear matrix equation, or system of linear scalar equations. Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
v = np.linalg.solve(A, c) print('Using solve:') print('v:\n', v)
Using solve: v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
And from the help for `lstsq`: numpy.linalg.lstsq(a, b, rcond=-1)[source] Return the least-squares solution to a linear matrix equation. Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.
v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v)
Using lstsq: v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Same solutions, of course. When a system of equations has a unique solution, the determinant of the **square** matrix associated with this system of equations is nonzero. When the determinant is zero, there are either no solutions or many solutions to the system of equations. But if we have an overdetermined system:$$ x + 2y = 4 $$$$ 3x + 4y = 10 $$$$ 5x + 6y = 15 $$(Note that the possible solution for this set of equations is not exact because the last equation should be equal to 16.) Let's try to solve it:
A = np.array([[1, 2], [3, 4], [5, 6]]) print('A:\n', A) c = np.array([4, 10, 15]) print('c:\n', c);
A: [[1 2] [3 4] [5 6]] c: [ 4 10 15]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Because the matrix $\mathbf{A}$ is not square, we can calculate its pseudo-inverse or use the function `linalg.lstsq`:
v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v)
Using lstsq: v: [ 1.3333 1.4167]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
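Because the system is overdetermined and inconsistent, the least-squares solution does not satisfy the equations exactly; a small check of the residual (an addition to the original notebook) makes this visible:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])
c = np.array([4, 10, 15])

v = np.linalg.lstsq(A, c, rcond=None)[0]

residual = c - A @ v  # how far each equation is from being satisfied
print(v)
print(residual)
print(np.linalg.norm(residual))
```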
The functions `inv` and `solve` would fail here because the matrix $\mathbf{A}$ is not square (the system is overdetermined). The function `lstsq`, on the other hand, not only handles an overdetermined system but also finds the best approximate (least-squares) solution. And if the set of equations were underdetermined, `lstsq` would also work. For instance, consider the system:$$ x + 2y + 2z = 10 $$$$ 3x + 4y + z = 13 $$And in matrix form:$$ \begin{bmatrix} 1 & 2 & 2 \\3 & 4 & 1 \end{bmatrix}\begin{bmatrix} x \\y \\z \end{bmatrix}= \begin{bmatrix} 10 \\13 \end{bmatrix}$$A possible solution is `x=2, y=1, z=3`, but other values also satisfy this set of equations. Let's try to solve it using `lstsq`:
A = np.array([[1, 2, 2], [3, 4, 1]]) print('A:\n', A) c = np.array([10, 13]) print('c:\n', c); v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v);
Using lstsq: v: [ 0.8 2. 2.6]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
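Among the infinitely many exact solutions of this underdetermined system, `lstsq` returns the one with the smallest Euclidean norm, which is the same solution given by the pseudoinverse. A quick check (added for illustration):

```python
import numpy as np

A = np.array([[1, 2, 2], [3, 4, 1]])
c = np.array([10, 13])

v_lstsq = np.linalg.lstsq(A, c, rcond=None)[0]
v_pinv = np.linalg.pinv(A) @ c  # minimum-norm solution via the pseudoinverse

print(v_lstsq, v_pinv)
print(np.allclose(v_lstsq, v_pinv))  # True
print(np.allclose(A @ v_lstsq, c))   # the equations are satisfied exactly
```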
# Python program to generate embedding (word vectors) using Word2Vec # importing necessary modules for embedding !pip install --upgrade gensim !pip install rdflib import rdflib !pip uninstall numpy !pip install numpy # pip install numpy and then hit the RESTART RUNTIME import gensim from gensim.models import Word2Vec from gensim.models import KeyedVectors from gensim.scripts.glove2word2vec import glove2word2vec import collections from collections import Counter from rdflib import Graph, URIRef, Namespace from google.colab import drive drive.mount('/content/drive') # check out if google dride mount suceessful !ls "/content/drive/My Drive/MonirResearchDatasets" # a funtion for ga-themes extraction from GA-rdf repository separate and return a list all the ga-themes - Monir def gaThemesExtraction(ga_record): gaThemes = [] with open(ga_record, 'rt') as f: data = f.readlines() for line in data: # check if line contains "ga-themes" sub-string if line.__contains__('ga-themes'): # split the line contains from "ga-themes" sub-string stringTemp = line.split("ga-themes/",1)[1] # further split the line contains from "ga-themes" sub-string to delimiter stringTemp = stringTemp.split('>')[0] gaThemes.append(stringTemp) #print(dataLog) #print(gaThemes[:9]) #print(len(gaThemes)) return gaThemes # a funtion imput a list of ga-themes and return a list of unique ga-themes and another list of duplicate gaThemes - def make_unique_gaThemes(list_all_ga_themes): # find a a list of unique ga-themes unique_gaThemes = [] unique_gaThemes = list(dict.fromkeys(gaThemes)) #print(len(unique_gaThemes)) # a list of duplicate gaThemes duplicate_gaThemes = [] duplicate_gaThemes = [item for item, count in collections.Counter(gaThemes).items() if count > 1] #print(len(duplicate_gaThemes)) return unique_gaThemes, duplicate_gaThemes ## KG-Embeddings filename = '/content/drive/My Drive/MonirResearchDatasets/Freebase-GoogleNews-vectors.bin' model = KeyedVectors.load_word2vec_format(filename, binary=True) def embedding_word_clusters(model, list_of_ga_themes, cluster_size): keys = list_of_ga_themes embedding_model = model n = cluster_size new_classifier = [] embedding_clusters = [] classifier_clusters = [] for word in keys: embeddings = [] words = [] # check if a word is fully "OOV" (out of vocabulary) for pre-trained embedding model if word in embedding_model.key_to_index: # create a new list of classifier new_classifier.append(word) # find most similar top n words from the pre-trained embedding model for similar_word, _ in embedding_model.most_similar(word, topn=n): words.append(similar_word) embeddings.append(embedding_model[similar_word]) embedding_clusters.append(embeddings) classifier_clusters.append(words) return embedding_clusters, classifier_clusters, new_classifier # to get all the ga-themes from all1K file ga_record_datapath = "/content/drive/My Drive/MonirResearchDatasets/surround-ga-records/all1k.ttl.txt" gaThemes = gaThemesExtraction(ga_record_datapath) print(gaThemes[:10]) print(len(gaThemes)) # to get all unique ga-themes unique_gaThemes, duplicate_gaThemes = make_unique_gaThemes(gaThemes) print(unique_gaThemes[:100]) #print(duplicate_gaThemes[:100]) print(len(unique_gaThemes)) embedding_clusters, classifier_clusters, new_classifier = embedding_word_clusters(model, unique_gaThemes[:10], 10) print(classifier_clusters) print(new_classifier) print(classifier_clusters[:2]) print(new_classifier[:2]) from rdflib import Graph g = Graph() g.parse("/content/drive/My 
Drive/MonirResearchDatasets/surround-ga-records/ga-records.ttl", format='turtle') print(len(g)) n_record = Namespace("http://example.com/record/") # <http://example.com/record/105030> n_GA = Namespace("http://example.org/def/ga-themes/") n_hasClassifier = Namespace("http://data.surroundaustralia.com/def/agr#") hasClassifier = "hasClassifier" #record = [] for obj in new_classifier[:1]: # for obj in new_classifier: results = g.query( """ PREFIX classifier: <http://data.surroundaustralia.com/def/agr#> PREFIX ga-themes: <http://example.org/def/ga-themes/> SELECT ?s WHERE { ?s classifier:hasClassifier ga-themes:""" + obj + """ } """) record = [] pos = new_classifier.index(obj) for row in results: # print(f"{row.s}") record.append(row.s) # adding classifier from classifier cluster to each of the list of records for classifier_obj in classifier_clusters[pos]: for record_data in record: g.add((record_data, n_hasClassifier.hasClassifier, n_GA[classifier_obj])) # adding classifier from classifier cluster to the list of records for q in record: g.add((record[q], n_hasClassifier.hasClassifier, n_GA[classifier_clusters[1]])) print(record) print(new_classifier) print(new_classifier.index('palaeontology')) print(classifier_clusters[0]) print(len(record)) print(len(record)) print(len(classifier_clusters)) a = [[1, 3, 4], [2, 4, 4], [3, 4, 5]] for recordlist in record: print(recordlist) for number in recordlist: print(number)
_____no_output_____
MIT
embedding_word_clusters2.ipynb
mzkhan2000/KG-Embeddings
Inference from the analysis: All the above variables show positive skewness; Age and Mean_distance_from_home are leptokurtic, while all other variables are platykurtic. The Mean_Monthly_Income IQR is about 54K, suggesting company-wide attrition across all income bands. Mean age forms a near-normal distribution with an IQR of 13 years. Outliers: no clear regression relationship is found when plotting Age, MonthlyIncome, TotalWorkingYears, YearsAtCompany, etc., on a scatter plot.
box_plot=dataset1.Age plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
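The skewness and kurtosis statements above can be backed with a quick numerical check. This sketch assumes the same `dataset1` DataFrame used in the box-plot cells and only the columns that appear there; the exact column list may differ:

```python
from scipy.stats import skew, kurtosis

# Fisher's definition is the default: kurtosis > 0 -> leptokurtic, < 0 -> platykurtic
for col in ['Age', 'MonthlyIncome', 'YearsAtCompany']:
    values = dataset1[col]
    print(col, 'skewness:', round(skew(values), 3), 'excess kurtosis:', round(kurtosis(values), 3))
```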
Age is normally distributed without any outliers
box_plot=dataset1.MonthlyIncome plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Monthly Income is right-skewed with several outliers
box_plot=dataset1.YearsAtCompany plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Years at Company is also right-skewed, with several outliers observed. Attrition Vs Distance from Home
from scipy.stats import mannwhitneyu

a1 = dataset.DistanceFromHome_Yes
a2 = dataset.DistanceFromHome_No

stat, p = mannwhitneyu(a1, a2)
print(stat, p)
# output: 3132625.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
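The same test-and-decide pattern repeats for each variable below, so it could be wrapped in a small helper. This is a sketch added for illustration only; the helper name and the `alpha` threshold are not part of the original notebook:

```python
from scipy.stats import mannwhitneyu

def compare_attrition_groups(yes_values, no_values, label, alpha=0.05):
    """Mann-Whitney U test; report whether H0 (no difference) is rejected."""
    stat, p = mannwhitneyu(yes_values, no_values)
    decision = 'rejected' if p < alpha else 'not rejected'
    print(f'{label}: U = {stat:.1f}, p = {p:.4g} -> H0 {decision}')
    return stat, p

compare_attrition_groups(dataset.DistanceFromHome_Yes, dataset.DistanceFromHome_No, 'DistanceFromHome')
```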
As the P value of 0.0 is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Distance From Home between attrition (Y) and attrition (N). Ha: There is a significant difference in Distance From Home between attrition (Y) and attrition (N). Attrition Vs Income
a1 = dataset.MonthlyIncome_Yes
a2 = dataset.MonthlyIncome_No

stat, p = mannwhitneyu(a1, a2)
print(stat, p)
# output: 3085416.0 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in income between attrition (Y) and attrition (N). Ha: There is a significant difference in income between attrition (Y) and attrition (N). Attrition Vs Total Working Years
a1 = dataset.TotalWorkingYears_Yes
a2 = dataset.TotalWorkingYears_No

stat, p = mannwhitneyu(a1, a2)
print(stat, p)
# output: 2760982.0 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Total Working Years between attrition (Y) and attrition (N). Ha: There is a significant difference in Total Working Years between attrition (Y) and attrition (N). Attrition Vs Years at Company
a1 = dataset.YearsAtCompany_Yes
a2 = dataset.YearsAtCompany_No

stat, p = mannwhitneyu(a1, a2)
print(stat, p)
# output: 2882047.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years At Company between attrition (Y) and attrition (N). Ha: There is a significant difference in Years At Company between attrition (Y) and attrition (N). Attrition Vs Years With Current Manager
a1 = dataset.YearsWithCurrManager_Yes
a2 = dataset.YearsWithCurrManager_No

stat, p = mannwhitneyu(a1, a2)
print(stat, p)
# output: 3674749.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years With Current Manager between attrition (Y) and attrition (N). Ha: There is a significant difference in Years With Current Manager between attrition (Y) and attrition (N). Statistical Tests (Separate T Test) Attrition Vs Distance From Home
from scipy.stats import ttest_ind

z1 = dataset.DistanceFromHome_Yes
z2 = dataset.DistanceFromHome_No

stat, p = ttest_ind(z2, z1)
print(stat, p)
# output: 44.45445917636664 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Distance From Home between attrition (Y) and attrition (N). Ha: There is a significant difference in Distance From Home between attrition (Y) and attrition (N). Attrition Vs Income
z1 = dataset.MonthlyIncome_Yes
z2 = dataset.MonthlyIncome_No

stat, p = ttest_ind(z2, z1)
print(stat, p)
# output: 52.09279408504947 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Monthly Income between attrition (Y) and attrition (N). Ha: There is a significant difference in Monthly Income between attrition (Y) and attrition (N). Attrition Vs Years At Company
z1 = dataset.YearsAtCompany_Yes
z2 = dataset.YearsAtCompany_No

stat, p = ttest_ind(z2, z1)
print(stat, p)
# output: 51.45296941515692 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is less than 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years At Company between attrition (Y) and attrition (N). Ha: There is a significant difference in Years At Company between attrition (Y) and attrition (N). Attrition Vs Years With Current Manager
z1 = dataset.YearsWithCurrManager_Yes
z2 = dataset.YearsWithCurrManager_No

stat, p = ttest_ind(z2, z1)
print(stat, p)
# output: 53.02424349024521 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Convolutional Networks So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead. First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
# As usual, a bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.cnn import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * from cs231n.fast_layers import * from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape)
X_train: (49000, 3, 32, 32) y_train: (49000,) X_val: (1000, 3, 32, 32) y_val: (1000,) X_test: (1000, 3, 32, 32) y_test: (1000,)
MIT
assignment2/ConvolutionalNetworks.ipynb
pranav-s/Stanford_CS234_CV_2017
Convolution: Naive forward pass The core of a convolutional network is the convolution operation. In the file `cs231n/layers.py`, implement the forward pass for the convolution layer in the function `conv_forward_naive`. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following:
x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.array([[[[-0.08759809, -0.10987781], [-0.18387192, -0.2109216 ]], [[ 0.21027089, 0.21661097], [ 0.22847626, 0.23004637]], [[ 0.50813986, 0.54309974], [ 0.64082444, 0.67101435]]], [[[-0.98053589, -1.03143541], [-1.19128892, -1.24695841]], [[ 0.69108355, 0.66880383], [ 0.59480972, 0.56776003]], [[ 2.36270298, 2.36904306], [ 2.38090835, 2.38247847]]]]) # Compare your output to ours; difference should be around e-8 print('Testing conv_forward_naive') print('difference: ', rel_error(out, correct_out)) a = np.array([[[1,2,3], [3,2,5]],[[1,2,3], [3,2,5]],[[1,2,3], [3,2,5]]]) #np.pad(a, 2, 'constant') image_pad = np.array([np.pad(channel, 1 , 'constant') for channel in x]) image_pad.shape out.shape #w[0,:, 0:4, 0:4].shape
_____no_output_____
MIT
assignment2/ConvolutionalNetworks.ipynb
pranav-s/Stanford_CS234_CV_2017
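The notebook leaves `conv_forward_naive` to be implemented in `cs231n/layers.py`. One possible naive implementation is sketched below; it illustrates the standard approach (zero-padding plus nested loops over the output grid) and is not necessarily identical to the code in this repository:

```python
import numpy as np

def conv_forward_naive(x, w, b, conv_param):
    """
    Naive forward pass for a convolution layer.
    x: input of shape (N, C, H, W); w: filters of shape (F, C, HH, WW); b: biases of shape (F,)
    Returns (out, cache) where out has shape (N, F, H', W').
    """
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride

    # zero-pad only the two spatial dimensions
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))

    for n in range(N):                  # images
        for f in range(F):              # filters
            for i in range(H_out):      # output rows
                for j in range(W_out):  # output columns
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]

    cache = (x, w, b, conv_param)
    return out, cache
```

With an implementation along these lines, the test cell above should report a relative error on the order of e-8.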