Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string lengths 15 to 150) | question (string lengths 37 to 64.2k) | answer (string lengths 37 to 44.1k) | tags (string lengths 5 to 106) | score (int64, -10 to 5.87k)
---|---|---|---|---|---|---|
378,200 | 62,281,292 | How to convert only one axis when constructing a dataframe from a JSON string? | <p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_json.html" rel="nofollow noreferrer">read_json</a> function has an argument <code>convert_axes</code>.</p>
<p>The problem is that for my data the column labels MUST NOT be converted (i.e. keep them as strings), but the index MUST be converted.</p>
<p>My dumb solution is to parse the string twice. Surely there is a better way?</p>
<pre><code>json_str = '{"1": {"1970-01-02 00:00:00": "foo"}}'
temp = pd.read_json(json_str, convert_axes=False)
want = pd.read_json(json_str, convert_axes=True)
want.columns = temp.columns
</code></pre>
<p><code>json_str</code> always comes in the format <code>{column -> {index -> value}}</code>, i.e. <code>orient='columns'</code>. The index does not have to be in datetime format, it could be an integer index, or something else.</p> | <p>Judging by the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html" rel="nofollow noreferrer">documentation</a> and the <a href="https://github.com/pandas-dev/pandas/blob/89c5a5941c4819e00a04d3f3722f9f5c9cf046f0/pandas/io/json/_json.py" rel="nofollow noreferrer">source code</a>, I don't think there is a way to apply <code>convert_axes</code> to just one axis.</p>
<p>I'm not sure this is any better than your own solution:</p>
<pre><code>import pandas as pd
json_str = '{"1": {"1970-01-02 00:00:00": "foo"}}'
df = pd.read_json(json_str, convert_axes=False)
df.index = pd.to_datetime(df.index)
</code></pre>
<p>Edit: I misunderstood the question the first time. Here's another go, which as requested leaves the column labels as strings but tries to convert the index:</p>
<pre><code>import pandas as pd
def read_json_convert_index(json_str, dtypes=['int', 'float']):
    '''
    Leaves the columns untouched but tries to convert the index to datetime
    and then subsequently to the types provided in the list dtypes
    '''
    df = pd.read_json(json_str, convert_axes=False)
    try:
        df.index = pd.to_datetime(df.index)
        return df
    except Exception:
        for dtype in dtypes:
            try:
                df.index = df.index.astype(dtype)
                # check if floats are actually just integers in disguise
                if dtype == 'float' and all(
                        [abs(i - int(i)) <= 0.1**10 for i in df.index]):
                    df.index = df.index.astype('int')
                    return df
                else:
                    return df
            except Exception:
                continue
    return df
</code></pre>
<p>Subsequent edit: As far as I can see from the <a href="https://github.com/pandas-dev/pandas/blob/89c5a5941c4819e00a04d3f3722f9f5c9cf046f0/pandas/io/json/_json.py" rel="nofollow noreferrer">source code</a> and from experimentation, <code>convert_axes</code> tries to cast each axis as either a timestamp, an integer or a float, although I may well have overlooked something. Incidentally, through this experimentation I found some potentially unexpected (unwanted?) behaviour: If you run this...</p>
<pre><code>import pandas as pd
json_str = '{"1": {"1.0": "foo"},"2": {"2.0": "bar"}}'
df = pd.read_json(json_str, convert_axes=True)
</code></pre>
<p>... then the axis is converted to a DatetimeIndex <code>['1970-01-01 00:00:01', '1970-01-01 00:00:02']</code>. I think the reason for this is that the float <code>1.0</code> is interpreted as the timestamp <code>1970-01-01 00:00:01</code>. The function <code>read_json_convert_index</code> defined above does <strong>not</strong> do this, as it tries to cast the <em>string</em> <code>'1.0'</code> as a timestamp, which fails.</p>
<p>As for the condition <code>abs(i - int(i)) <= 0.1**10</code>: This checks whether the floats are very close to integer values and thus can be safely cast as integers. For instance, the code</p>
<pre><code>import pandas as pd
json_str = '{"1": {"1.0": "foo"},"2": {"2.0": "bar"}}'
df = read_json_convert_index(json_str)
</code></pre>
<p>produces the index <code>[1, 2]</code>, rather than <code>[1.0, 2.0]</code>.</p>
<p>Just a general point: I think one should be wary with automatic type conversion, since it can lead to unexpected behaviour, as demonstrated above.</p> | python|json|pandas | 2 |
378,201 | 62,318,260 | How can I count words based on the column? | <p><img src="https://i.stack.imgur.com/ZsTRy.png" alt="enter image description here"></p>
<p>Hello. I'm stuck here.
Could you tell me how I can count words based on the tags in the second column?</p>
<p>I want to find the most-used words using .most_common(), by category: the top 10 in VB (Verb) and the top 10 in Noun. </p> | <p>To spell out what Ari Cooper-Davis suggested:</p>
<pre><code>pos.loc[pos.tag == 'VBN'].word.value_counts()
pos.loc[pos.tag == 'TO'].word.value_counts()
etc.
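
# A further sketch (my addition, assuming Penn Treebank-style tags in a DataFrame named pos):
# the 10 most common words across all verb tags, as asked for via most_common / top 10
pos.loc[pos.tag.str.startswith('VB'), 'word'].value_counts().head(10)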
</code></pre> | python|pandas|dataframe|nltk|part-of-speech | 1 |
378,202 | 62,201,977 | Removing rows that do not start with/contain specific words | <p>I have the following output</p>
<pre><code>Age
'1 year old',
'14 years old',
'music store',
'7 years old ',
'16 years old ',
</code></pre>
<p>created after using this line of code</p>
<pre><code>df['Age']=df['Age'].str.split('.', expand=True,n=0)[0]
df['Age'].tolist()
</code></pre>
<p>I would like to remove rows from the dataset (it would be better to use a copy of it, or a new one after filtering) that do not start with a number, or with a number + year + old, or with a number + years + old. </p>
<p>Expected output</p>
<pre><code>Age (in a new dataset filtered)
'1 year old',
'14 years old',
'7 years old ',
'16 years old ',
</code></pre>
<p>How could I do this?</p> | <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a> and create a boolean mask to filter the dataframe:</p>
<pre><code>m = df['Age'].str.contains(r'(?i)^\d+\syears?\sold')
df1 = df[m]
</code></pre>
<p>Result:</p>
<pre><code># print(df1)
Age
0 1 year old
1 14 years old
3 7 years old
4 16 years old
</code></pre>
<p>You can test the regex pattern <a href="https://regex101.com/r/pbUBvJ/1" rel="nofollow noreferrer"><code>here</code></a>.</p> | python|regex|pandas|dataframe | 1 |
378,203 | 62,345,852 | how to calculate a running total in a pandas dataframe | <p>I have a data frame that contains precipitation data that looks like this</p>
<pre><code>Date Time, Raw Measurement, Site ID, Previous Raw Measurement, Raw - Previous
2020-05-06 14:15:00,12.56,8085,12.56,0.0
2020-05-06 14:30:00,12.56,8085,12.56,0.0
2020-05-06 14:45:00,12.56,8085,12.56,0.0
2020-05-06 15:00:00,2.48,8085,12.56,-10.08
2020-05-06 15:30:00,2.48,8085,2.47,0.01
2020-05-06 15:45:00,2.48,8085,2.48,0.0
2020-05-06 16:00:00,2.50,8085,2.48,0.02
2020-05-06 16:15:00,2.50,8085,2.50,0.0
2020-05-06 16:30:00,2.50,8085,2.50,0.0
2020-05-06 16:45:00,2.51,8085,2.50,0.01
2020-05-06 17:00:00,2.51,8085,2.51,0.0
</code></pre>
<p>I would like to use the last column 'Raw - Previous', which is simply the difference between the most recent observation and the previous observation, to create a running total of the positive changes, giving an accumulation column. From time to time I have to empty out the rain gauge, so 'Raw - Previous' will be negative when that occurs; I would like to filter this out of my df while keeping a tally of the total accumulation of the gauge. I've come across solutions that use
<code>df.sum()</code>
but from what I can gather, they only provide the total sum of the entire column and not the running total after each row.</p>
<p>In all my goal is to have something like this</p>
<pre><code>Date Time, Raw Measurement, Site ID, Previous Raw Measurement, Raw - Previous, Total Accumulation
2020-05-06 14:15:00,12.56,8085,12.56,0.0,12.56
2020-05-06 14:30:00,12.56,8085,12.56,0.0,12.56
2020-05-06 14:45:00,12.56,8085,12.56,0.0,12.56
2020-05-06 15:00:00,2.48,8085,12.56,-10.08,12.56
2020-05-06 15:15:00,2.47,8085,2.48,-0.01,12.56
2020-05-06 15:30:00,2.48,8085,2.47,0.01,12.57
2020-05-06 15:45:00,2.48,8085,2.48,0.0,12.57
2020-05-06 16:00:00,2.50,8085,2.48,0.02,12.59
2020-05-06 16:15:00,2.50,8085,2.50,0.0,12.59
2020-05-06 16:30:00,2.50,8085,2.50,0.0,12.59
2020-05-06 16:45:00,2.51,8085,2.50,0.01,12.60
2020-05-06 17:00:00,2.51,8085,2.51,0.0,12.60
</code></pre>
<p>EDIT: Changed title to better reflect what the question became</p> | <p><code>np.where</code> will also do the job.</p>
<pre><code>import pandas as pd, numpy as np
# keep only the positive changes, running-total them, then add the first
# 'Previous Raw Measurement' (df.iloc[0, 3]), i.e. the initial gauge reading
df['Total Accumulation'] = np.where((df['Raw - Previous'] > 0), df['Raw - Previous'], 0).cumsum() + df.iloc[0, 3]
df
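
# An equivalent, arguably more idiomatic sketch (my addition, not part of the
# original answer): clip away the negative differences instead of using np.where
# df['Total Accumulation'] = df['Raw - Previous'].clip(lower=0).cumsum() + df.iloc[0, 3]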
</code></pre>
<p>Output:</p>
<pre><code> Date Time Raw Measurement Site ID Previous Raw Measurement Raw - Previous Total Accumulation
0 2020-05-06 14:15:00 12.56 8085 12.56 0.00 12.56
1 2020-05-06 14:30:00 12.56 8085 12.56 0.00 12.56
2 2020-05-06 14:45:00 12.56 8085 12.56 0.00 12.56
3 2020-05-06 15:00:00 2.48 8085 12.56 -10.08 12.56
4 2020-05-06 15:30:00 2.48 8085 2.47 0.01 12.57
5 2020-05-06 15:45:00 2.48 8085 2.48 0.00 12.57
6 2020-05-06 16:00:00 2.50 8085 2.48 0.02 12.59
7 2020-05-06 16:15:00 2.50 8085 2.50 0.00 12.59
8 2020-05-06 16:30:00 2.50 8085 2.50 0.00 12.59
9 2020-05-06 16:45:00 2.51 8085 2.50 0.10 12.69
10 2020-05-06 17:00:00 2.51 8085 2.51 0.00 12.69
</code></pre> | python|pandas | 1 |
378,204 | 62,225,230 | Consistent ColumnTransformer for intersecting lists of columns | <p>I want to use <code>sklearn.compose.ColumnTransformer</code> consistently (not parallel, so, the second transformer should be executed only after the first) for intersecting lists of columns in this way:</p>
<pre><code>log_transformer = p.FunctionTransformer(lambda x: np.log(x))
df = pd.DataFrame({'a': [1,2, np.NaN, 4], 'b': [1,np.NaN, 3, 4], 'c': [1 ,2, 3, 4]})
compose.ColumnTransformer(n_jobs=1,
transformers=[
('num', impute.SimpleImputer() , ['a', 'b']),
('log', log_transformer, ['b', 'c']),
('scale', p.StandardScaler(), ['a', 'b', 'c'])
]).fit_transform(df)
</code></pre>
<p>So, I want to use <code>SimpleImputer</code> for <code>'a'</code>, <code>'b'</code>, then <code>log</code> for <code>'b'</code>, <code>'c'</code>, and then <code>StandardScaler</code> for <code>'a'</code>, <code>'b'</code>, <code>'c'</code>.</p>
<p>But:</p>
<ol>
<li>I get array of <code>(4, 7)</code> shape.</li>
<li>I still get <code>Nan</code> in <code>a</code> and <code>b</code> columns.</li>
</ol>
<p>So, how can I use <code>ColumnTransformer</code> for different columns in the manner of <code>Pipeline</code>?</p>
<p><strong>UPD:</strong></p>
<pre><code>pipe_1 = pipeline.Pipeline(steps=[
('imp', impute.SimpleImputer(strategy='constant', fill_value=42)),
])
pipe_2 = pipeline.Pipeline(steps=[
('imp', impute.SimpleImputer(strategy='constant', fill_value=24)),
])
pipe_3 = pipeline.Pipeline(steps=[
('scl', p.StandardScaler()),
])
# in the real situation I don't know exactly what cols these arrays contain, so they are not static:
cols_1 = ['a']
cols_2 = ['b']
cols_3 = ['a', 'b', 'c']
proc = compose.ColumnTransformer(remainder='passthrough', transformers=[
('1', pipe_1, cols_1),
('2', pipe_2, cols_2),
('3', pipe_3, cols_3),
])
proc.fit_transform(df).T
</code></pre>
<p>Output:</p>
<pre><code>array([[ 1. , 2. , 42. , 4. ],
[ 1. , 24. , 3. , 4. ],
[-1.06904497, -0.26726124, nan, 1.33630621],
[-1.33630621, nan, 0.26726124, 1.06904497],
[-1.34164079, -0.4472136 , 0.4472136 , 1.34164079]])
</code></pre>
<p>I understand why I have duplicated cols, <code>nans</code> and unscaled values, but how can I fix this in the correct way when the cols are not static? </p>
<p><strong>UPD2:</strong></p>
<p>A problem may arise when the columns change their order. So, I want to use <code>FunctionTransformer</code> for columns selection:</p>
<pre><code>def select_col(X, cols=None):
return X[cols]
ct1 = compose.make_column_transformer(
(p.OneHotEncoder(), p.FunctionTransformer(select_col, kw_args=dict(cols=['a', 'b']))),
remainder='passthrough'
)
ct1.fit(df)
</code></pre>
<p>But get this output: </p>
<blockquote>
<p>ValueError: No valid specification of the columns. Only a scalar, list or slice of all integers or all strings, or boolean mask is allowed</p>
</blockquote>
<p>How can I fix it?</p> | <p>The intended usage of <code>ColumnTransformer</code> is that the different transformers are applied in parallel, not sequentially. To accomplish your desired outcome, three approaches come to mind:</p>
<p><strong>First approach:</strong></p>
<pre class="lang-py prettyprint-override"><code>pipe_a = Pipeline(steps=[('imp', SimpleImputer()),
('scale', StandardScaler())])
pipe_b = Pipeline(steps=[('imp', SimpleImputer()),
('log', log_transformer),
('scale', StandardScaler())])
pipe_c = Pipeline(steps=[('log', log_transformer),
('scale', StandardScaler())])
proc = ColumnTransformer(transformers=[
('a', pipe_a, ['a']),
('b', pipe_b, ['b']),
('c', pipe_c, ['c'])]
)
</code></pre>
<p><strong>This second one actually won't work</strong>, because the <code>ColumnTransformer</code> will rearrange the columns and forget the names*, so that the later ones will fail or apply to the wrong columns. When sklearn finalizes how to pass along dataframes or feature names, this may be salvaged, or you may be able to tweak it for your specific usecase now. (* ColumnTransformer does already have a <code>get_feature_names</code>, but the actual data passed through the pipeline doesn't have that information.)</p>
<pre class="lang-py prettyprint-override"><code>imp_tfm = ColumnTransformer(
transformers=[('num', impute.SimpleImputer() , ['a', 'b'])],
remainder='passthrough'
)
log_tfm = ColumnTransformer(
transformers=[('log', log_transformer, ['b', 'c'])],
remainder='passthrough'
)
scl_tfm = ColumnTransformer(
    transformers=[('scale', StandardScaler(), ['a', 'b', 'c'])]
)
proc = Pipeline(steps=[
('imp', imp_tfm),
('log', log_tfm),
('scale', scl_tfm)]
)
</code></pre>
<p><strong>Third</strong>, there may be a way to use the <code>Pipeline</code> slicing feature to have one "master" pipeline that you cut down for each feature... this would work mostly like the first approach, might save some coding in the case of larger pipelines, but seems a little hacky. For example, here you can:</p>
<pre class="lang-py prettyprint-override"><code>pipe_a = clone(pipe_b)[1:]
pipe_c = clone(pipe_b)
pipe_c.steps[1] = ('nolog', 'passthrough')
</code></pre>
<p>(Without cloning or otherwise deep-copying <code>pipe_b</code>, the last line would change both <code>pipe_c</code> and <code>pipe_b</code>. The slicing mechanism returns a copy, so <code>pipe_a</code> doesn't strictly need to be cloned, but I've left it in to feel safer. Unfortunately you can't provide a discontinuous slice, so <code>pipe_c = pipe_b[0,2]</code> doesn't work, but you <em>can</em> set the individual slices as I've done above to <code>"passthrough"</code> to disable them.)</p> | python|pandas|scikit-learn|scipy|sklearn-pandas | 5 |
378,205 | 62,341,182 | FutureWarning: elementwise comparison failed; when dropping all rows from pandas dataframe | <p>I want to drop those rows in a dataframe that have value '0' in the column 'candidate'. Some of my dataframes only have value '0' in this column. I expected that in this case I would get an empty dataframe, but instead I get the following warning and the unchanged dataframe. How can I get an empty dataframe in this case? Or prevent returning an unchanged dataframe?</p>
<p>Warning message:</p>
<p><em>C:\Users\User\Anaconda3\lib\site-packages\pandas\core\ops\array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
res_values = method(rvalues)</em></p>
<p>My code:</p>
<pre><code>with open(filename, encoding='utf-8') as file:
df = pd.read_csv(file, sep=',')
df.drop(df.index[(df['candidate'] == '0')], inplace=True)
print(df)
post id ... candidate
0 1 ... 0
1 1 ... 0
2 1 ... 0
3 1 ... 0
4 1 ... 0
.. ... ... ...
182 10 ... 0
183 10 ... 0
184 10 ... 0
185 10 ... 0
186 10 ... 0
[187 rows x 4 columns]
</code></pre> | <p>Thanks everyone for your suggestions!</p>
<p>Indeed, the value type is <code>int</code>, but only if 0 is the only value in the column. Where other values are present, the type is <code>object</code>.</p>
<p>So I solved the problem by using:</p>
<p><code>df = df.loc[(df["candidate"] != "0") & (df["candidate"] != 0)]</code></p> | python|pandas|dataframe | 1 |
378,206 | 62,305,744 | Numpy split array without copying | <p>I have a very large array of images (multiple GBs) and want to split it using numpy. This is my code:</p>
<pre><code>images = ... # this is the very large array which contains a lot of images.
images.shape => (50000, 256, 256)
indices = ... # array containing ranges, that group the images array like [(0, 300), (301, 580), (581, 860), ...]
train_indices, test_indices = ... # both arrays contain indices like [1, 6, 8, 19] which determine which groups are in the train and which are in the test group
images_train, images_test = np.empty([0, images.shape[1], images.shape[2]]), np.empty([0, images.shape[1], images.shape[2]])
# assign the image groups to either train or test set
for (i, rng) in enumerate(indices):
group_range = range(rng[0], rng[1]+1)
if i in train_indices:
images_train = np.concatenate((images_train, images[group_range]))
else:
images_test = np.concatenate((images_test, images[group_range]))
</code></pre>
<p>The problem with this code is that <code>images_train</code> and <code>images_test</code> are new arrays and the single images are always copied into these new arrays. This leads to double the memory needed to run the program.</p>
<p>Is there a way to split my <code>images</code> array into <code>images_train</code> and <code>images_test</code> without having to copy the images, but rather reuse them?</p>
<p>My intention with the indices is to group the images into roughly 150 groups, where images from one group should be either in the train or test set</p> | <p>Without running code it's difficult to understand the details. But I can try to give some ideas. If you have <code>images_train</code> and <code>images_test</code> then you will probably use them to train and to test with a command that is something like</p>
<pre><code>.fit(images_train);
.score(images_test)
</code></pre>
<p>An approach might be that you do not build <code>images_train</code> and <code>images_test</code> but that you use part of <code>images</code> directly</p>
<pre><code>.fit(images[...]);
.score(images[...])
</code></pre>
<p>Now the question is, what should be in the <code>[...]</code> brackets? Or is there a numpy operator that extracts the right <code>images[...]</code>? First we have to think about <strong>what we should avoid</strong>:</p>
<ul>
<li>for loop is always slow</li>
<li>iterative filling of an array like <code>A = np.concatenate((A, B[j]))</code> is always slow</li>
<li>Python's "fancy indexing" is always slow, as <code>group_range = range(rng[0], rng[1]+1); images[group_range]</code></li>
</ul>
<p><strong>Some ideas</strong>:</p>
<ul>
<li>use slices instead of "fancy indexing" <a href="https://stackoverflow.com/questions/45290102/is-2-dimensional-numpy-take-fast">see here</a></li>
<li>images[rng[0] : rng[1]+1] , or</li>
<li><p>group_range = slice(rng[0] , rng[1]+1); images[group_range]</p></li>
<li><p>Is <code>images_train = images[train_indices, :, :]</code> and <code>images_test = images[test_indices, :, :]</code> ?</p></li>
<li>images.shape => (50000, 256, 256) is 3-dimensional ? </li>
<li>try whether <code>numpy.where</code> can give some assistance</li>
<li>below the methods I've mentioned</li>
</ul>
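<p>A hedged sketch of the slice idea from the list above (my addition, not part of the original answer): keep each group as a slice of <code>images</code>, which is only a view, and avoid <code>np.concatenate</code> unless a single contiguous array is really required:</p>
<pre><code># every entry below is a view into images, so no pixel data is copied here
train_views = [images[rng[0]:rng[1] + 1] for i, rng in enumerate(indices) if i in train_indices]
test_views = [images[rng[0]:rng[1] + 1] for i, rng in enumerate(indices) if i not in train_indices]
</code></pre>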
<p>...</p>
<pre><code>import numpy as np
A = np.arange(20); print("A =",A)
B = A[5:16:2]; print("B =",B) # view of A only, faster
j = slice(5, 16, 2); C = A[j]; print("C =",C) # view of A only, faster
k = [2, 4, 8, 12]; D = A[k]; print("D =",D) # generates internal copies
A = [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
B = [ 5 7 9 11 13 15]
C = [ 5 7 9 11 13 15]
D = [ 2 4 8 12]
</code></pre> | python|numpy|training-data|train-test-split | 1 |
378,207 | 62,184,437 | Reshaping a numpy vector | <p>I am really new to numpy. I have a numpy vector that when I run <code>y.shape</code> returns <code>(4000,)</code>. Is there a way, I can have it return <code>(4000, 1)</code>?</p> | <pre><code>np.reshape(y,(4000,1))
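# Equivalent idioms (my addition, a sketch): the array's own reshape method,
# or indexing with np.newaxis to add a trailing axis
# y.reshape(-1, 1)      # -1 lets numpy infer the first dimension
# y[:, np.newaxis]      # also gives shape (4000, 1)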
</code></pre>
<p>The reshape function can be used to do this</p> | numpy|vector | 0 |
378,208 | 62,088,979 | Easiest way to print the head of the data in Python? | <p>I'm not defining my array with pandas, I'm using numpy to do it, and I would like to know if there is any other way to print the first 5 rows of the data. Using pandas this is how I would do it: print(data.head()).</p>
<p>This is how i defined my data:</p>
<pre><code>with open('B0_25.txt', 'r') as simulation_data:
simulation_data = [x.strip() for x in simulation_data if x.strip()]
data = [tuple(map(float, x.split())) for x in simulation_data[2:100]]
x = [x[1] for x in data]
y = [x[2] for x in data]
z = [x[3] for x in data]
mx = [x[4] for x in data]
my = [x[5] for x in data]
mz = [x[6] for x in data]
mydata = np.array([x, y, z, mx, my, mz])
</code></pre> | <p>You need the transpose of mydata, otherwise x, y, z, mx, my, mz are the rows rather than the columns.</p>
<pre><code>mydata = np.array([x, y, z, mx, my, mz]).T
print(mydata[:5, :])
</code></pre> | python|pandas|head | 1 |
378,209 | 62,440,732 | print array as a matrix by having all elements in the right columns | <p>I am trying to print my dataframe as a matrix. To do so, I want to use an array. To be clear:</p>
<p>I have a dictionary, Y, which is like this:</p>
<pre><code>{(0, 0): {(0, 0): 0, (1, 0): 1, (0, 1): 1, (0, 2): 2, (0, 3): 3, (1, 3): 4, (0, 4): 10, (1, 4): 9, (0, 5): 11, (1, 1): 2, (1, 2): 5, (2, 2): 6, (2, 4): 8, (1, 5): 10, (2, 0): 10, (3, 0): 9, (2, 1): 7, (3, 1): 8, (3, 2): 7, (2, 3): 7, (3, 4): 9, (2, 5): 9, (3, 5): 10, (3, 3): 8}, (1, 0): {(1, 0): 0, (0, 0): 1, (1, 1): 1, (0, 1): 2, (0, 2): 3, (0, 3): 4, (1, 3): 5, (0, 4): 11, (1, 4): 10, (0, 5): 12, (1, 2): 6, (2, 2): 7, (2, 4): 9, (1, 5): 11, (2, 0): 11, (3, 0): 10, (2, 1): 8, (3, 1): 9, (3, 2): 8, (2, 3): 8, (3, 4): 10, (2, 5): 10, (3, 5): 11, (3, 3): 9}, (0, 1): {(0, 1): 0, (0, 0): 1, (0, 2): 1, (1, 0): 2, (0, 3): 2, (1, 3): 3, (0, 4): 9, (1, 4): 8, (0, 5): 10, (1, 1): 3, (1, 2): 4, (2, 2): 5, (2, 4): 7, (1, 5): 9, (2, 0): 9, (3, 0): 8, (2, 1): 6, (3, 1): 7, (3, 2): 6, (2, 3): 6, (3, 4): 8, (2, 5): 8, (3, 5): 9, (3, 3): 7}, (0, 2): {(0, 2): 0, (0, 1): 1, (0, 3): 1, (0, 0): 2, (1, 0): 3, (1, 3): 2, (0, 4): 8, (1, 4): 7, (0, 5): 9, (1, 1): 4, (1, 2): 3, (2, 2): 4, (2, 4): 6, (1, 5): 8, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 5, (3, 4): 7, (2, 5): 7, (3, 5): 8, (3, 3): 6}, (0, 3): {(0, 3): 0, (0, 2): 1, (1, 3): 1, (0, 0): 3, (1, 0): 4, (0, 1): 2, (0, 4): 7, (1, 4): 6, (0, 5): 8, (1, 1): 5, (1, 2): 2, (2, 2): 3, (2, 4): 5, (1, 5): 7, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 4, (3, 4): 6, (2, 5): 6, (3, 5): 7, (3, 3): 5}, (1, 3): {(1, 3): 0, (0, 3): 1, (1, 2): 1, (0, 0): 4, (1, 0): 5, (0, 1): 3, (0, 2): 2, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 6, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 6, (3, 0): 5, (2, 1): 3, (3, 1): 4, (3, 2): 3, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6, (3, 3): 4}, (0, 4): {(0, 4): 0, (1, 4): 1, (0, 5): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (1, 5): 2, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 6}, (1, 4): {(1, 4): 0, (0, 4): 1, (2, 4): 1, (1, 5): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 5): 2, (1, 1): 11, (1, 2): 4, (2, 2): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (3, 4): 2, (2, 5): 2, (3, 5): 3, (3, 3): 5}, (0, 5): {(0, 5): 0, (0, 4): 1, (0, 0): 11, (1, 0): 12, (0, 1): 10, (0, 2): 9, (0, 3): 8, (1, 3): 7, (1, 4): 2, (1, 1): 13, (1, 2): 6, (2, 2): 5, (2, 4): 3, (1, 5): 3, (2, 0): 9, (3, 0): 8, (2, 1): 6, (3, 1): 7, (3, 2): 6, (2, 3): 4, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 7}, (1, 1): {(1, 1): 0, (1, 0): 1, (0, 0): 2, (0, 1): 3, (0, 2): 4, (0, 3): 5, (1, 3): 6, (0, 4): 12, (1, 4): 11, (0, 5): 13, (1, 2): 7, (2, 2): 8, (2, 4): 10, (1, 5): 12, (2, 0): 12, (3, 0): 11, (2, 1): 9, (3, 1): 10, (3, 2): 9, (2, 3): 9, (3, 4): 11, (2, 5): 11, (3, 5): 12, (3, 3): 10}, (1, 2): {(1, 2): 0, (1, 3): 1, (2, 2): 1, (0, 0): 5, (1, 0): 6, (0, 1): 4, (0, 2): 3, (0, 3): 2, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 7, (2, 4): 3, (1, 5): 5, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (3, 2): 2, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 3}, (2, 2): {(2, 2): 0, (1, 2): 1, (2, 1): 1, (3, 2): 1, (2, 3): 1, (0, 0): 6, (1, 0): 7, (0, 1): 5, (0, 2): 4, (0, 3): 3, (1, 3): 2, (0, 4): 4, (1, 4): 3, (0, 5): 5, (1, 1): 8, (2, 4): 2, (1, 5): 4, (2, 0): 4, (3, 0): 3, (3, 1): 2, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 2}, (2, 4): {(2, 4): 0, (1, 4): 1, (2, 3): 1, (3, 4): 1, (2, 5): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 2, (0, 5): 3, (1, 1): 10, (1, 2): 3, (2, 2): 2, (1, 5): 2, (2, 0): 6, (3, 0): 5, 
(2, 1): 3, (3, 1): 4, (3, 2): 3, (3, 5): 2, (3, 3): 4}, (1, 5): {(1, 5): 0, (1, 4): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 2, (0, 5): 3, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 6}, (2, 0): {(2, 0): 0, (3, 0): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 8, (1, 4): 7, (0, 5): 9, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 6, (1, 5): 8, (2, 1): 3, (3, 1): 2, (3, 2): 5, (2, 3): 5, (3, 4): 7, (2, 5): 7, (3, 5): 8, (3, 3): 6}, (3, 0): {(3, 0): 0, (2, 0): 1, (3, 1): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 7, (1, 4): 6, (0, 5): 8, (1, 1): 11, (1, 2): 4, (2, 2): 3, (2, 4): 5, (1, 5): 7, (2, 1): 2, (3, 2): 4, (2, 3): 4, (3, 4): 6, (2, 5): 6, (3, 5): 7, (3, 3): 5}, (2, 1): {(2, 1): 0, (2, 2): 1, (3, 1): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 9, (1, 2): 2, (2, 4): 3, (1, 5): 5, (2, 0): 3, (3, 0): 2, (3, 2): 2, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 3}, (3, 1): {(3, 1): 0, (3, 0): 1, (2, 1): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 10, (1, 2): 3, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 2, (3, 2): 3, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6, (3, 3): 4}, (3, 2): {(3, 2): 0, (2, 2): 1, (3, 3): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 9, (1, 2): 2, (2, 4): 3, (1, 5): 5, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5}, (2, 3): {(2, 3): 0, (2, 2): 1, (2, 4): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 9, (1, 2): 2, (1, 5): 3, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (3, 2): 2, (3, 4): 2, (2, 5): 2, (3, 5): 3, (3, 3): 3}, (3, 4): {(3, 4): 0, (2, 4): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 11, (1, 2): 4, (2, 2): 3, (1, 5): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (2, 5): 2, (3, 5): 3, (3, 3): 5}, (2, 5): {(2, 5): 0, (2, 4): 1, (3, 5): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 11, (1, 2): 4, (2, 2): 3, (1, 5): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (3, 4): 2, (3, 3): 5}, (3, 5): {(3, 5): 0, (2, 5): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 4, (1, 4): 3, (0, 5): 5, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (1, 5): 4, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (3, 3): 6}, (3, 3): {(3, 3): 0, (3, 2): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 10, (1, 2): 3, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 6, (3, 0): 5, (2, 1): 3, (3, 1): 4, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6}}
</code></pre>
<p>Using pandas I converted the dictionary to a dataframe:</p>
<pre><code>df = pd.DataFrame(Y)
df.index = [*df.index]
df.columns = [*df.columns]
arraydf = df.to_numpy()
</code></pre>
<p>This is the dataframe I get:</p>
<pre><code> (0, 0) (1, 0) (0, 1) (0, 2) ... (3, 4) (2, 5) (3, 5) (3, 3)
(0, 0) 0 1 1 2 ... 9 9 10 8
(1, 0) 1 0 2 3 ... 10 10 11 9
(0, 1) 1 2 0 1 ... 8 8 9 7
(0, 2) 2 3 1 0 ... 7 7 8 6
(0, 3) 3 4 2 1 ... 6 6 7 5
(1, 3) 4 5 3 2 ... 5 5 6 4
(0, 4) 10 11 9 8 ... 3 3 4 6
(1, 4) 9 10 8 7 ... 2 2 3 5
(0, 5) 11 12 10 9 ... 4 4 5 7
(1, 1) 2 1 3 4 ... 11 11 12 10
(1, 2) 5 6 4 3 ... 4 4 5 3
(2, 2) 6 7 5 4 ... 3 3 4 2
(2, 4) 8 9 7 6 ... 1 1 2 4
(1, 5) 10 11 9 8 ... 3 3 4 6
(2, 0) 10 11 9 8 ... 7 7 8 6
(3, 0) 9 10 8 7 ... 6 6 7 5
(2, 1) 7 8 6 5 ... 4 4 5 3
(3, 1) 8 9 7 6 ... 5 5 6 4
(3, 2) 7 8 6 5 ... 4 4 5 1
(2, 3) 7 8 6 5 ... 2 2 3 3
(3, 4) 9 10 8 7 ... 0 2 3 5
(2, 5) 9 10 8 7 ... 2 0 1 5
(3, 5) 10 11 9 8 ... 3 1 0 6
(3, 3) 8 9 7 6 ... 5 5 6 0
</code></pre>
<p>Then, I convert the df to an array:</p>
<pre><code>arraydf = df.to_numpy()
</code></pre>
<p>This is my output now:</p>
<pre><code>[ 0 1 1 2 3 4 10 9 11 2 5 6 8 10 10 9 7 8 7 7 9 9 10 8]
[ 1 0 2 3 4 5 11 10 12 1 6 7 9 11 11 10 8 9 8 8 10 10 11 9]
[ 1 2 0 1 2 3 9 8 10 3 4 5 7 9 9 8 6 7 6 6 8 8 9 7]
[2 3 1 0 1 2 8 7 9 4 3 4 6 8 8 7 5 6 5 5 7 7 8 6]
[3 4 2 1 0 1 7 6 8 5 2 3 5 7 7 6 4 5 4 4 6 6 7 5]
[4 5 3 2 1 0 6 5 7 6 1 2 4 6 6 5 3 4 3 3 5 5 6 4]
[10 11 9 8 7 6 0 1 1 12 5 4 2 2 8 7 5 6 5 3 3 3 4 6]
[ 9 10 8 7 6 5 1 0 2 11 4 3 1 1 7 6 4 5 4 2 2 2 3 5]
[11 12 10 9 8 7 1 2 0 13 6 5 3 3 9 8 6 7 6 4 4 4 5 7]
[ 2 1 3 4 5 6 12 11 13 0 7 8 10 12 12 11 9 10 9 9 11 11 12 10]
[5 6 4 3 2 1 5 4 6 7 0 1 3 5 5 4 2 3 2 2 4 4 5 3]
[6 7 5 4 3 2 4 3 5 8 1 0 2 4 4 3 1 2 1 1 3 3 4 2]
[ 8 9 7 6 5 4 2 1 3 10 3 2 0 2 6 5 3 4 3 1 1 1 2 4]
[10 11 9 8 7 6 2 1 3 12 5 4 2 0 8 7 5 6 5 3 3 3 4 6]
[10 11 9 8 7 6 8 7 9 12 5 4 6 8 0 1 3 2 5 5 7 7 8 6]
[ 9 10 8 7 6 5 7 6 8 11 4 3 5 7 1 0 2 1 4 4 6 6 7 5]
[7 8 6 5 4 3 5 4 6 9 2 1 3 5 3 2 0 1 2 2 4 4 5 3]
[ 8 9 7 6 5 4 6 5 7 10 3 2 4 6 2 1 1 0 3 3 5 5 6 4]
[7 8 6 5 4 3 5 4 6 9 2 1 3 5 5 4 2 3 0 2 4 4 5 1]
[7 8 6 5 4 3 3 2 4 9 2 1 1 3 5 4 2 3 2 0 2 2 3 3]
[ 9 10 8 7 6 5 3 2 4 11 4 3 1 3 7 6 4 5 4 2 0 2 3 5]
[ 9 10 8 7 6 5 3 2 4 11 4 3 1 3 7 6 4 5 4 2 2 0 1 5]
[10 11 9 8 7 6 4 3 5 12 5 4 2 4 8 7 5 6 5 3 3 1 0 6]
[ 8 9 7 6 5 4 6 5 7 10 3 2 4 6 6 5 3 4 1 3 5 5 6 0]
</code></pre>
<p>My question is: <strong>How can I get the final array to look like a matrix?</strong> I want all the lines to be the same length and in the right order (and to have "nice", readable columns as well)</p>
<p>EDIT:
requested info:</p>
<pre><code>arraydf.shape
(24, 24)
arraydf.dtype
int64
df.dtypes
(0, 0) int64
(0, 1) int64
(0, 2) int64
(1, 2) int64
(0, 3) int64
(0, 4) int64
(1, 4) int64
(0, 5) int64
(1, 5) int64
(1, 0) int64
(2, 0) int64
(1, 1) int64
(1, 3) int64
(2, 3) int64
(3, 0) int64
(2, 1) int64
(2, 2) int64
(2, 4) int64
(2, 5) int64
(3, 5) int64
(3, 1) int64
(3, 2) int64
(3, 3) int64
(3, 4) int64
dtype: object
df.info
<bound method DataFrame.info of (0, 0) (0, 1) (0, 2) (1, 2) ... (3, 1) (3, 2) (3, 3) (3, 4)
(0, 0) 0 1 2 3 ... 8 9 10 11
(0, 1) 1 0 1 2 ... 7 8 9 10
(0, 2) 2 1 0 1 ... 6 7 8 9
(1, 2) 3 2 1 0 ... 5 6 7 8
(0, 3) 3 2 1 2 ... 7 8 9 10
(0, 4) 4 3 2 3 ... 8 9 10 11
(1, 4) 5 4 3 4 ... 9 10 11 12
(0, 5) 5 4 3 4 ... 9 10 11 12
(1, 5) 6 5 4 5 ... 10 11 12 13
(1, 0) 5 4 3 2 ... 3 4 5 6
(2, 0) 6 5 4 3 ... 2 3 4 5
(1, 1) 4 3 2 1 ... 4 5 6 7
(1, 3) 4 3 2 1 ... 6 7 8 9
(2, 3) 5 4 3 2 ... 7 8 9 10
(3, 0) 7 6 5 4 ... 1 2 3 4
(2, 1) 7 6 5 4 ... 3 4 5 6
(2, 2) 8 7 6 5 ... 4 5 6 7
(2, 4) 14 13 12 11 ... 6 5 4 3
(2, 5) 13 12 11 10 ... 5 4 3 2
(3, 5) 12 11 10 9 ... 4 3 2 1
(3, 1) 8 7 6 5 ... 0 1 2 3
(3, 2) 9 8 7 6 ... 1 0 1 2
(3, 3) 10 9 8 7 ... 2 1 0 1
(3, 4) 11 10 9 8 ... 3 2 1 0
</code></pre> | <p>If you want to print line-by-line and still have things aligned you can do the following:</p>
<pre><code>>>> for l in str(df.to_numpy()).split("\n"):
... print(l)
...
[[ 0 1 1 2 3 4 10 9 11 2 5 6 8 10 10 9 7 8 7 7 9 9 10 8]
[ 1 2 0 1 2 3 9 8 10 3 4 5 7 9 9 8 6 7 6 6 8 8 9 7]
[ 2 3 1 0 1 2 8 7 9 4 3 4 6 8 8 7 5 6 5 5 7 7 8 6]
[ 3 4 2 1 0 1 7 6 8 5 2 3 5 7 7 6 4 5 4 4 6 6 7 5]
...
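
# (My addition, a sketch:) numpy's print options can also keep each row of a wide
# array on a single line instead of wrapping it, e.g.
# np.set_printoptions(linewidth=200)
# print(df.to_numpy())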
</code></pre> | python|arrays|pandas|numpy|output | 2 |
378,210 | 62,446,010 | Keras Creating CNN Model "The added layer must be an instance of class Layer" | <pre><code>from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, Input, Dense
def create_model():
def add_conv_block(model, num_filters):
model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
return model
model = tf.keras.models.Sequential()
model.add(Input(shape=(32, 32, 3)))
model = add_conv_block(model, 32)
model = add_conv_block(model, 64)
model = add_conv_block(model, 128)
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
model = create_model()
model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/yNmSz.png" rel="nofollow noreferrer">enter image description here</a></p> | <p>The solution is to use <code>InputLayer</code> instead of <code>Input</code>. <code>InputLayer</code> is meant to be used with <code>Sequential</code> models. You can also omit the <code>InputLayer</code> entirely and specify <code>input_shape</code> in the first layer of the sequential model.</p>
<p><code>Input</code> is meant to be used with the TensorFlow Keras functional API, not the sequential API.</p>
<pre><code>from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, InputLayer, Dense
def create_model():
def add_conv_block(model, num_filters):
model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
return model
model = tf.keras.models.Sequential()
model.add(InputLayer((32, 32, 3)))
model = add_conv_block(model, 32)
model = add_conv_block(model, 64)
model = add_conv_block(model, 128)
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
model = create_model()
model.summary()
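
# (My addition, a sketch of the other option mentioned above:) instead of an explicit
# InputLayer, the first Conv2D can be given an input_shape argument, e.g.
# model.add(Conv2D(32, 3, activation='relu', padding='same', input_shape=(32, 32, 3)))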
</code></pre> | python|tensorflow|keras|conv-neural-network | 2 |
378,211 | 51,293,345 | Unique data for each day using Python/Pandas Dataframe | <p>I'm trying to process each day's data using pandas. Below is my code, data and current output. However, the function getUniqueDates() has to traverse the full df to get the unique dates in the list as shown below. Is there any simple and efficient way to get each day's data which can be passed to the function processDataForEachDate()? Traversing a big list is time-consuming. I have stripped down the columns in this example to keep it simple.</p>
<pre><code> data = {'date': ['2014-05-01 18:47:05.069722', '2014-05-01 18:47:05.119994', '2014-05-02 18:47:05.178768', '2014-05-02 18:47:05.230071', '2014-05-02 18:47:05.230071', '2014-05-02 18:47:05.280592', '2014-05-03 18:47:05.332662', '2014-05-03 18:47:05.385109', '2014-05-04 18:47:05.436523', '2014-05-04 18:47:05.486877'],
'noOfJobs': [34, 25, 26, 15, 15, 14, 26, 25, 62, 41]}
df = pd.DataFrame(data, columns = ['date', 'noOfJobs'])
df = df.astype(dtype= {"date":'datetime64[ns]'})
print(df)
#Ouput====================================
date noOfJobs
0 2014-05-01 18:47:05.069722 34
1 2014-05-01 18:47:05.119994 25
2 2014-05-02 18:47:05.178768 26
3 2014-05-02 18:47:05.230071 15
4 2014-05-02 18:47:05.230071 15
5 2014-05-02 18:47:05.280592 14
6 2014-05-03 18:47:05.332662 26
7 2014-05-03 18:47:05.385109 25
8 2014-05-04 18:47:05.436523 62
9 2014-05-04 18:47:05.486877 41
def getUniqueDates():
todaysDate = datetime.datetime.today().strftime('%Y-%m-%d')
listOfDates=[]
for c,r in df.iterrows():
if r.date.date() != todaysDate:
todaysDate=r.date.date()
listOfDates.append(todaysDate)
return listOfDates
listOfDates = getUniqueDates()
print(listOfDates)
# Output====================================
[datetime.date(2014, 5, 1),
datetime.date(2014, 5, 2),
datetime.date(2014, 5, 3),
datetime.date(2014, 5, 4)]
for eachDate in listOfDates:
processDataForEachDate(eachDate)
</code></pre> | <p>You can access a NumPy array of unique dates with:</p>
<pre><code>>>> df.date.dt.date.unique()
array([datetime.date(2014, 5, 1), datetime.date(2014, 5, 2),
datetime.date(2014, 5, 3), datetime.date(2014, 5, 4)], dtype=object)
</code></pre>
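<p>A further sketch (my addition, not part of the original answer): if the goal is to hand each day's rows to a processing function, grouping on the normalized dates avoids building the list of dates at all:</p>
<pre><code>for each_date, day_df in df.groupby(df.date.dt.date):
    processDataForEachDate(each_date)  # day_df holds that day's rows if the function needs them
</code></pre>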
<p><code>dt</code> is an <em>accessor method</em> of the pandas Series <code>df.date</code>. Basically, it's a class that acts as a property-like interface to a bunch of date-time-related methods. The benefit is that it is vectorized (see <a href="https://stackoverflow.com/a/24871316/7954504">here</a> for a comparison to <code>.iterrows()</code> from a Pandas developer), and that accessor methods also use a "cached property" design:</p>
<ul>
<li><a href="https://github.com/pandas-dev/pandas/blob/5d0daa0522730a6f999ceaf328f63f03dd62d0b4/pandas/core/accessor.py#L113" rel="nofollow noreferrer">Link to source</a></li>
<li><a href="https://www.pydanny.com/cached-property.html" rel="nofollow noreferrer">Link to an explanation</a></li>
</ul> | python|pandas|dataframe | 1 |
378,212 | 51,195,017 | tf.keras.backend way of replacing a tensors value if it's less than 1 | <p>I am using Keras with the Tensorflow backend.</p>
<p>In my loss function I have a tensor where I need to replace the elements that are less than 1 with a 1.</p>
<p>I can see loads of functions available to me in the docs
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/backend" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/backend</a></p>
<p>but I'm not sure how to go about this.</p>
<p>If I do:</p>
<pre><code>a_ = tf.Print(
message='a_shape',
input_=a_,
data=[tf.shape(a_)]
)
</code></pre>
<p>I get the shape as:</p>
<pre><code>y_shape[128]
</code></pre>
<p>I need to essentially iterate through this tensor replacing elements that are less than 1 with a 1.</p>
<p>How would I do this using the keras tensorflow API?</p>
<p>Thanks -</p> | <p>If <code>a</code> is your tensor, you can do the following:</p>
<p><code>b = a*tf.cast(a>1, 'float32') + tf.cast(a<=1, 'float32')</code></p> | python|tensorflow|machine-learning|keras|tensor | 1 |
378,213 | 51,358,307 | compare the next row value and change the current row value using pandas python | <p>Is there any way of comparing a row value with the next row's value and changing the current row value using pandas?</p>
<p>Basically, in the first data frame DF1, one of the values in the value column is '999', so the values of the following rows for that 'user-id' are less than '999'. In this case I want to add '1000', which is 10^(len('999')), to all successive values of that 'user-id'.</p>
<p>I tried using shift, but I found that it skips one of the row values by giving a 'Null'. And I am also not sure how to do it without creating a new value. </p>
<p>For example,
if this is the data set I have, DF1</p>
<pre><code>user-id serial-number value day
1 2 10 1
1 2 20 2
1 2 30 3
1 2 40 4
1 2 50 5
1 2 60 6
1 2 70 7
1 2 80 8
1 2 90 9
1 2 100 10
1 2 999 11
1 2 300 12
1 2 400 13
2 3 11 1
2 3 12 2
2 3 13 3
2 3 14 4
2 3 99 5
2 3 16 6
2 3 17 7
2 3 18 8
</code></pre>
<p>I need the resultant data frame to be DF1:</p>
<pre><code>user-id serial-number value day
1 2 10 1
1 2 20 1
1 2 30 1
1 2 40 1
1 2 50 1
1 2 60 1
1 2 70 1
1 2 80 1
1 2 90 1
1 2 100 1
1 2 999 1
1 2 1300 1
1 2 1400 1
. .
2 3 11 1
2 3 12 1
2 3 13 1
2 3 14 1
2 3 99 1
2 3 116 1
2 3 117 1
2 3 118 1
</code></pre>
<p>I think I've explained the question properly.</p>
<p>Similarly, I want to do it for all the values in the "value" column for each user ID.</p>
<p>Any suggestions?</p> | <p>I have 2 methods for this:</p>
<p>In this method we multiply by the max value of each user-id - it works on the sample dataset you provided but it might not work overall.</p>
<pre><code>df.set_index('user-id', inplace=True)
df['value'] += df.groupby('user-id')['value'].apply(
lambda x:(x.shift() > x).astype(int).cumsum()
) * 10**df.groupby('user-id')['value'].max().apply(lambda x: len(str(x)))
</code></pre>
<p>The other one is looping through each item:</p>
<pre><code>def foo(x):
for i in range(1,len(x)):
if x.iloc[i] < x.iloc[i-1]:
x.iloc[i:] = x.iloc[i:] + 10**(len(str(x.iloc[i-1])))
return x
df['value'] = df.groupby('user-id')['value'].apply(foo)
</code></pre> | python|pandas|pandas-groupby | 0 |
378,214 | 51,495,927 | How to visualize a matrix of categories as an RGB image? | <p>I am using neural network to do semantic segmentation(human parsing), something like taking a photo of people as input and the neural network tells that every pixel is most likely to be head, leg, background or some other parts of human. The algorithm runs smoothly and giving a <code>numpy.ndarray</code> as output . The shape of the array is <code>(1,23,600,400)</code>, where 600*400 is the resolution of the input image and 23 is the number of categories. The 3d matrix looks like a 23-layer stacked 2d matrices, where each layer using a matrix of float to tell the possibility that each pixel is of that category.</p>
<p>To visualize the matrix like the following figure, I used <code>numpy.argmax</code> to squash the 3d matrix into a 2d matrix that holds the index of the most possible category. But I don't have any idea how to proceed to get the visualization I want.</p>
<p><a href="https://i.stack.imgur.com/tgDp2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgDp2.png" alt="The desired visualization effect"></a></p>
<h1>EDIT</h1>
<p>Actually, I can do it in a trivial way. That is, use a for loop to traverse through every pixel and assign a color to it to get an image. However, this is not vectorized code, since numpy has built-in ways to speed up matrix manipulation. And I need to save CPU cycles for real-time segmentation.</p> | <p>It's fairly easy. All you need to have is a <a href="https://en.wikipedia.org/wiki/Lookup_table" rel="nofollow noreferrer">lookup table</a> mapping the 23 labels into unique colors. The easiest way is to have a 23-by-3 numpy array with each row storing the RGB values for the corresponding label:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
lut = np.random.rand(23, 3) # using random mapping - but you can do better
lb = np.argmax(prediction, axis=1) # converting probabilities to discrete labels
rgb = lut[lb[0, ...], :] # this is all it takes to do the mapping.
plt.imshow(rgb)
plt.show()
</code></pre>
<p>Alternatively, if you are only interested in the colormap for display purposes, you can use <code>cmap</code> argument of <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html" rel="nofollow noreferrer"><code>plt.imshow</code></a>, but this will requires you to transform <code>lut</code> into a "colormap":</p>
<pre><code>from matplotlib.colors import LinearSegmentedColormap
cmap = LinearSegmentedColormap.from_list('new_map', lut, N=23)
plt.imshow(lb[0, ...], cmap=cmap)
plt.show()
</code></pre> | python|numpy|visualization|image-segmentation|semantic-segmentation | 2 |
378,215 | 51,163,941 | In Pandas, how to make a PivotTable for counting and skip replicates? | <p>In Python3 and pandas I have a dataframe like this:</p>
<pre><code>IdComissao SiglaComissao NomeMembro
12444 CCJR Abelardo Camarinha
12444 CCJR Abelardo Camarinha
12448 CAD Abelardo Camarinha
12448 CAD Abelardo Camarinha
12453 CMADS Abelardo Camarinha
12453 CMADS Abelardo Camarinha
12453 CMADS Abelardo Camarinha
13297 CPI-InvTer Abelardo Camarinha
8509 CFC Abelardo Camarinha
8509 CFC Abelardo Camarinha
13149 CPIATFC Abelardo Camarinha
12444 CCJR Vaz de Lima
12445 CFOP Vaz de Lima
12445 CFOP Vaz de Lima
12445 CFOP Vaz de Lima
12454 CAE Vaz de Lima
12455 CDD Vaz de Lima
8501 CCJ Vaz de Lima
8503 CAP Vaz de Lima
8509 CFC Vaz de Lima
8509 CFC Vaz de Lima
8511 CEP Vaz de Lima
8515 CFO Vaz de Lima
8515 CFO Vaz de Lima
8515 CFO Vaz de Lima
8515 CFO Vaz de Lima
8515 CFO Vaz de Lima
8519 CSOP Vaz de Lima
8521 CEDP Vaz de Lima
</code></pre>
<p>I am looking for a way to count how many times each name "NomeMembro" has an item "SiglaComissao", without repeats</p>
<p>For example, the name "Abelardo Camarinha" has six types of "SiglaComissao" and the name "Vaz de Lima" has 11 types</p>
<p>Please, is there a way to make a PivotTable to count items without repeats?</p> | <p>I think you're looking for <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nunique.html" rel="nofollow noreferrer"><code>nunique</code></a>:</p>
<pre><code>df.groupby('NomeMembro')['SiglaComissao'].nunique()
</code></pre>
<p>Which returns:</p>
<pre><code>NomeMembro
Abelardo Camarinha 6
Vaz de Lima 11
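
# (My addition, a sketch:) the same count expressed as an actual pivot table, since
# the question asked for one:
# df.pivot_table(index='NomeMembro', values='SiglaComissao', aggfunc=pd.Series.nunique)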
</code></pre> | python|pandas|pivot-table | 2 |
378,216 | 51,236,215 | How to reset GPU on keras / tensorflow hang? | <p>Sometimes I have to kill my python application which uses the GPU with Keras or Tensorflow, and after that I can't run them anymore. This is probably because the GPU is still used by something. </p>
<p>How to free GPU by force without machine reboot?</p>
<hr>
<p>I tried the following shell script</p>
<pre><code>$ cat ~/bin/nvidia-reset
#!/bin/sh
sudo rmmod nvidia_uvm
sudo rmmod nvidia_drm
sudo rmmod nvidia_modeset
sudo rmmod nvidia
sudo nvidia-smi
</code></pre>
<p>But often it is unable to do the job, saying <code>nvidia_uvm</code> is busy.</p> | <p>Try this: </p>
<pre><code>keras.backend.clear_session()
</code></pre> | python|tensorflow|keras|gpu | 0 |
378,217 | 51,250,413 | Python List append with respective index | <p>I need help on list append. I have to export it into CSV with the respective list index.</p>
<pre><code>lst1 = ['a', 'b', 'c']
lst2 = ['w', 'f', 'g']
lst3 = ['e', 'r', 't']
ap = []
ap.append((lst1, lst2, lst3))
output: [(['a', 'b', 'c'], ['w', 'f', 'g'], ['e', 'r', 't'])]
</code></pre>
<p>Expected output:</p>
<pre><code>[('a', 'w', 'e')
('b', 'f', 'r')
('c', 'g', 't')]
</code></pre>
<p>I need to export to Excel via Pandas, please help.</p>
<pre><code> col1 col2 col3
a w e
b f r
c g t
</code></pre> | <p>You need a list of tuples, not a list of a tuple of lists. For your result, you can use <code>zip</code> with unpacking to extract items in an iterable of lists by index.</p>
<pre><code>df = pd.DataFrame(list(zip(*(lst1, lst2, lst3))),
columns=['col1', 'col2', 'col3'])
print(df)
col1 col2 col3
0 a w e
1 b f r
2 c g t
</code></pre>
<p>Then export to Excel as you normally would:</p>
<pre><code>df.to_excel('file.xlsx', index=False)
</code></pre> | python|python-3.x|pandas|dataframe | 3 |
378,218 | 51,135,928 | Perform operations on last iteration values using iterrows | <p>I have two datasets.</p>
<p>df</p>
<pre><code>Name Date Quantity
ZMTD 2018-06-30 1000
ZMTD 2018-05-31 975
ZMTD 2018-04-30 920
ZMTD 2018-03-30 900
ZMTD 2018-02-28 840
ZMTD 2018-01-31 820
ZMTD 2017-12-30 760
ZMTD 2017-11-31 600
ZMTD 2017-10-30 1200
ZMTD 2017-09-31 1170
ZMTD 2017-08-30 1090
ZMTD 2017-07-30 1100
</code></pre>
<p>df2 </p>
<pre><code>Name Date Factor
KOC 2018-01-15 0.5
ZMTD 2017-11-10 1.5
ZMTD 2018-03-20 2.5
BND 2016-03-20 25
</code></pre>
<p>I am trying to divide the column 'Quantity' in df by the column 'Factor' in df2 on all rows that satisfy the condition df['Date'] < df2['Date'].</p>
<p>I wrote the following code</p>
<pre><code>name = df['Name'].iloc[0]
for i, row in df2.iterrows():
if row[0] == name:
factor_date = row[1]
ratio = row[2]
for j, rows in df.iterrows():
new_quantity = rows[2]
if (rows[1] < factor_date):
new_quantity = (new_quantity / ratio)
df.at[i, 'Quantity'] = new_quantity
</code></pre>
<p>When I run this code, I expect the following values </p>
<pre><code>Name Date Quantity
ZMTD 2018-06-30 1000
ZMTD 2018-05-31 975
ZMTD 2018-04-30 920
ZMTD 2018-03-30 900
ZMTD 2018-02-28 336
ZMTD 2018-01-31 328
ZMTD 2017-12-30 304
ZMTD 2017-11-31 240
ZMTD 2017-10-30 320
ZMTD 2017-09-31 312
ZMTD 2017-08-30 290.66
ZMTD 2017-07-30 293.34
</code></pre>
<p>But I get values where the Quantity column is divided only by the latest Factor value, 2.5, and not values where the earlier rows are additionally divided by 1.5</p>
<p>I was wondering if we can save the values of the initial iteration and then run the new iteration on the previous values using iterrows.</p> | <p>This will give you what you seek:</p>
<pre><code>df = df1.merge(df2, on='Name', how='left', suffixes=('', '2'))
df['Factor'] = ((df['Date'] < df['Date2']).astype(int) * df['Factor']).replace(0, 1)
df = df.groupby(['Name', 'Date']).agg({'Quantity': 'max', 'Factor': 'prod'}).reset_index()
df['Quantity'] = df['Quantity'] / df['Factor']
df[['Name', 'Date', 'Quantity']].sort_values(['Name', 'Date'], ascending=False).reset_index(drop=True)
# Name Date Quantity
#0 ZMTD 2018-06-30 1000.000000
#1 ZMTD 2018-05-31 975.000000
#2 ZMTD 2018-04-30 920.000000
#3 ZMTD 2018-03-30 900.000000
#4 ZMTD 2018-02-28 336.000000
#5 ZMTD 2018-01-31 328.000000
#6 ZMTD 2017-12-30 304.000000
#7 ZMTD 2017-11-31 240.000000
#8 ZMTD 2017-10-30 320.000000
#9 ZMTD 2017-09-31 312.000000
#10 ZMTD 2017-08-30 290.666667
#11 ZMTD 2017-07-30 293.333333
</code></pre> | python|pandas|loops|iteration | 2 |
378,219 | 51,546,293 | Seaborn and Pandas: Make multiple x-category bar plot using multi index data in python | <p>I have a multi-index dataframe that I've melted to look something like this:</p>
<pre><code>Color Frequency variable value
Red 2-3 times a month x 22
Red A few days a week x 45
Red At least once a day x 344
Red Never x 5
Red Once a month x 1
Red Once a week x 0
Red Once every few months x 4
Blue 2-3 times a month x 4
Blue A few days a week x 49
Blue At least once a day x 200
Blue Never x 7
Blue Once a month x 19
Blue Once a week x 10
Blue Once every few months x 5
Red 2-3 times a month y 3
Red A few days a week y 97
Red At least once a day y 144
Red Never y 4
Red Once a month y 0
Red Once a week y 0
Red Once every few months y 4
Blue 2-3 times a month y 44
Blue A few days a week y 62
Blue At least once a day y 300
Blue Never y 2
Blue Once a month y 4
Blue Once a week y 23
Blue Once every few months y 6
Red 2-3 times a month z 4
Red A few days a week z 12
Red At least once a day z 101
Red Never z 0
Red Once a month z 0
Red Once a week z 10
Red Once every few months z 0
Blue 2-3 times a month z 100
Blue A few days a week z 203
Blue At least once a day z 299
Blue Never z 0
Blue Once a month z 0
Blue Once a week z 204
Blue Once every few months z 100
</code></pre>
<p>I'm trying to make a seaborn plot where there are two categories for the x-axis <code>variable</code> and <code>Frequency</code> and the hue is based on <code>Color</code>. Moreover, I want the y-axis to be the proportion of <code>value</code> over the sum of the values for that <code>variable</code> for each <code>Color</code>; e.g. the y-value for variable "x.2-3 times a month" should be 22/(22+45+344+5+1+0+4) or 5.22%.</p>
<p>So far I have this:</p>
<pre><code>import seaborn as sns
fig, ax1 = plt.subplots(figsize=(20, 10))
sns.factorplot(x='variable',y='value', hue='Frequency', data=df, kind='bar', ax=ax1)
</code></pre>
<p>This is part of the way there. How do I also groupby 1) Color and 2) take the <em>proportion</em> of values for each <code>variable</code> & <code>Frequency</code>, rather than the count?</p> | <p>This is what you need to find the portion of each number for that group:</p>
<pre><code>df['proportion'] = df['value'] / df.groupby(['Color','variable'])['value'].transform('sum')
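
# A hedged follow-up (my addition, not part of the original answer): with the
# proportion column in place, newer seaborn versions can show both categorical
# dimensions by faceting, keeping Color as the hue, e.g.
# sns.catplot(x='Frequency', y='proportion', hue='Color', col='variable',
#             data=df, kind='bar')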
</code></pre>
<p>Output:</p>
<pre><code> variable Frequency Color value portion
0 x 2-3 times a month Red 22 0.052257
1 x A few days a week Red 45 0.106888
2 x At least once a day Red 344 0.817102
3 x Never Red 5 0.011876
4 x Once a month Red 1 0.002375
5 x Once a week Red 0 0.000000
6 x Once every few months Red 4 0.009501
7 x 2-3 times a month Blue 4 0.013605
8 x A few days a week Blue 49 0.166667
9 x At least once a day Blue 200 0.680272
10 x Never Blue 7 0.023810
11 x Once a month Blue 19 0.064626
12 x Once a week Blue 10 0.034014
13 x Once every few months Blue 5 0.017007
14 y 2-3 times a month Red 3 0.011905
15 y A few days a week Red 97 0.384921
16 y At least once a day Red 144 0.571429
17 y Never Red 4 0.015873
18 y Once a month Red 0 0.000000
19 y Once a week Red 0 0.000000
20 y Once every few months Red 4 0.015873
21 y 2-3 times a month Blue 44 0.099773
22 y A few days a week Blue 62 0.140590
23 y At least once a day Blue 300 0.680272
24 y Never Blue 2 0.004535
25 y Once a month Blue 4 0.009070
26 y Once a week Blue 23 0.052154
27 y Once every few months Blue 6 0.013605
28 z 2-3 times a month Red 4 0.031496
29 z A few days a week Red 12 0.094488
30 z At least once a day Red 101 0.795276
31 z Never Red 0 0.000000
32 z Once a month Red 0 0.000000
33 z Once a week Red 10 0.078740
34 z Once every few months Red 0 0.000000
35 z 2-3 times a month Blue 100 0.110375
36 z A few days a week Blue 203 0.224062
37 z At least once a day Blue 299 0.330022
38 z Never Blue 0 0.000000
39 z Once a month Blue 0 0.000000
40 z Once a week Blue 204 0.225166
41 z Once every few months Blue 100 0.110375
</code></pre> | python|pandas|dataframe|seaborn | 2 |
378,220 | 51,203,054 | Can't Import TensorFlow in Anaconda 3.6 on Windows 10 | <p>I just installed CUDA 9.2, cuDNN and TensorFlow on my Windows 10 laptop. </p>
<p>I am unable to import TensorFlow in Python. I get a trace from Python that says: </p>
<blockquote>
<p>can't load a dll</p>
</blockquote>
<p>But it doesn't say which one it is. Here is the trace I received. Can you help?</p>
<blockquote>
<p>PS C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin> python
Python 3.6.0 |Anaconda 4.3.0 (64-bit)| (default, Dec 23 2016,
11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help",
"copyright", "credits" or "license" for more information.
import tensorflow as tf
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 17, in swig_import_helper
return importlib.import_module(mname)
File "C:\Program Files\Anaconda3\lib\importlib__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: DLL load failed: The specified module could not be found.</p>
</blockquote>
<p>During handling of the above exception, another exception occurred:</p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
File "C:\Program
Files\Anaconda3\lib\site-packages\tensorflow__init__.py", line 24, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "C:\Program
Files\Anaconda3\lib\site-packages\tensorflow\python__init__.py", line
49, in
from tensorflow.python import pywrap_tensorflow File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 74, in
raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Program
Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 58, in
from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Program
Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 18, in
_pywrap_tensorflow_internal = swig_import_helper() File "C:\Program
Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py",
line 17, in swig_import_helper
return importlib.import_module(mname) File "C:\Program Files\Anaconda3\lib\importlib__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level) ImportError: DLL load failed: The specified module could not be found.</p>
<p>Failed to load the native TensorFlow runtime.</p>
<p>See
<a href="https://www.tensorflow.org/install/install_sources#common_installation_problems" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_sources#common_installation_problems</a></p>
<p>for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.</p>
</blockquote> | <p>On Windows this is mostly caused by a missing MSVCP140.dll; it is usually fixed by installing</p>
<p><a href="https://www.microsoft.com/en-us/download/details.aspx?id=53587" rel="nofollow noreferrer">Microsoft Visual C++</a></p>
<p>If that doesn't help, the following dependencies are also required by TensorFlow:</p>
<p>KERNEL32.dll</p>
<p>WSOCK32.dll</p>
<p>WS2_32.dll</p>
<p>SHLWAPI.dll</p>
<p>python35.dll</p>
<p>MSVCP140.dll</p>
<p>VCRUNTIME140.dll</p>
<p>api-ms-win-crt-runtime-l1-1-0.dll</p>
<p>api-ms-win-crt-heap-l1-1-0.dll</p>
<p>api-ms-win-crt-utility-l1-1-0.dll</p>
<p>api-ms-win-crt-stdio-l1-1-0.dll</p>
<p>api-ms-win-crt-string-l1-1-0.dll</p>
<p>api-ms-win-crt-math-l1-1-0.dll</p>
<p>api-ms-win-crt-convert-l1-1-0.dll</p>
<p>api-ms-win-crt-environment-l1-1-0.dll</p>
<p>api-ms-win-crt-filesystem-l1-1-0.dll</p>
<p>api-ms-win-crt-time-l1-1-0.dll</p> | python|tensorflow | 0 |
378,221 | 51,217,584 | Semi-Interactive Pandas Dataframe in a GUI | <p>There are a number of excellent answers to this question <a href="https://stackoverflow.com/questions/10636024/python-pandas-gui-for-viewing-a-dataframe-or-matrix">GUIs for displaying dataframes</a>, but what I'm looking to do is a bit more advanced.</p>
<p>I'd like to display a dataframe, but have a couple of the columns be interactive where the user can manually overwrite values (and the rest be static). It would be useful to have "total" rows that change with the overwritten values and eventually have some interactive buttons around the dataframe for loading and clearing data.</p>
<p><a href="https://github.com/draperjames/qtpandas/issues" rel="nofollow noreferrer">QTPandas</a> looks promising, but appears to be dead as it is build off of a really old version of Pandas (0.17.1). Can this be done in QT? Is something else better?</p> | <p>I love Rstudio as my IDE as I can not only view all objects created but I can also edit data in the IDE itself. There are many other great features too.
And you can use R Studio for Python coding too (using reticulate package).</p>
<p>Spyder too gives this feature of viewing or editing the data frame.</p>
<p>However, if you're looking for a <strong>dedicated GUI with drag & drop features</strong>, you can use <strong>Pandas GUI</strong>.
Features of <a href="https://pypi.org/project/pandasgui/#history" rel="nofollow noreferrer">pandasgui</a> are:</p>
<ul>
<li>View DataFrames and Series (with MultiIndex support)</li>
<li>Interactive plotting</li>
<li>Filtering</li>
<li>Statistical summary</li>
<li>Data editing and copy / paste</li>
<li>Import CSV files with drag & drop</li>
<li>Search toolbar</li>
</ul>
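<p>Basic usage is a one-liner (a minimal sketch; <code>df</code> stands in for whatever DataFrame you want to inspect):</p>
<pre><code>import pandas as pd
from pandasgui import show

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, 6.0]})
show(df)   # opens the PandasGUI window with df loaded for viewing and editing
</code></pre>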
<p>Its first version was released in March 2019 and it is still under active development. As of now, you can't use it in Colab.</p> | python|pandas|user-interface|pyqt|interactive | 1 |
378,222 | 51,500,281 | how to get a hidden layer of tensorflow hub module | <p>I want to use tensorflow hub to generate features for my images, but it seems that the 2048 features of Inception Module are not enough for my problem because my class images are very similar. so I decided to use the features of a hidden layer of this module, for example: </p>
<blockquote>
<p>"module/InceptionV3/InceptionV3/Mixed_7c/concat:0"</p>
</blockquote>
<p>so how can I write a function that gives me this ?*8*8*2048 features from my input images? </p> | <p>Please try</p>
<pre><code>module = hub.Module(...) # As before.
outputs = module(dict(images=images),
signature="image_feature_vector",
as_dict=True)
print(outputs.items())
</code></pre>
<p>Besides the <code>default</code> output with the final feature vector output, you should see a bunch of intermediate feature maps, under keys starting with <code>InceptionV3/</code> (or whichever other architecture you select). These are 4D tensors with shape <code>[batch_size, feature_map_height, feature_map_width, num_features]</code>, so you might want to remove those middle dimensions by avg- or max-pooling over them before feeding this into classification.</p> | python|tensorflow|tensorflow-hub | 1 |
378,223 | 51,291,804 | Keras: different validation AUROC during training and on epoch end | <p>I'm getting different AUROC depending on when I calculate it. My code is </p>
<pre><code> def auc_roc(y_true, y_pred):
# any tensorflow metric
value, update_op = tf.metrics.auc(y_true, y_pred)
return update_op
model.compile(loss='binary_crossentropy', optimizer=optim, metrics=['accuracy', auc_roc])
my_callbacks = [roc_callback(training_data=(x_train, y_train),validation_data=(x_test,y_test))]
model.fit(x_train, y_train, validation_data=(x_test, y_test), callbacks=my_callbacks)
</code></pre>
<p>Where <code>roc_callback</code> is a Keras callback that calculates the AUROC at the end of each epoch using <code>roc_auc_score</code> from sklearn. I use the code that is defined <a href="https://stackoverflow.com/a/46844409/6832556">here</a>.</p>
<p>When I train the model, I get the following statistics:</p>
<pre><code> Train on 38470 samples, validate on 9618 samples
Epoch 1/15
38470/38470 [==============================] - auc_roc: 0.5116 - val_loss: 0.6899 - val_acc: 0.6274 - val_auc_roc: 0.5440
roc-auc_val: 0.5973
Epoch 2/15
38470/38470 [==============================] - auc_roc: 0.5777 - val_loss: 0.6284 - val_acc: 0.6870 - val_auc_roc: 0.6027
roc-auc_val: 0.6391
.
.
.
.
.
.
.
Epoch 12/15
38470/38470 [==============================] - auc_roc: 0.8754 - val_loss: 0.9569 - val_acc: 0.7747 - val_auc_roc: 0.8779
roc-auc_val: 0.6369
</code></pre>
<p>So how is the AUROC calculated during training going up with each epoch? Why is it different from the one calculated at the epoch end? </p> | <p>During training, the metrics are calculated "per batch".
And they keep updating for each new batch in some sort of "mean" between the current batch metrics and the previous results. </p>
<p>Now, your callback calculates on the "entire data", and only at the end. There will be normal differences between the two methods. </p>
<p>It's very common to see the next epoch start with a metric way better than the value shown for the last epoch, because the old metric includes in its mean value a lot of batches that weren't trained at that time. </p>
<p>You can perform a more precise comparison by calling <code>model.evaluate(x_test,y_test)</code>. Not sure if there will be conflicts by calling this "during" training, but you could train each epoch individually and call this between each epoch.</p>
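<p>A minimal sketch of that idea (hedged; it assumes the same <code>compile(metrics=['accuracy', auc_roc])</code> call as in the question, so <code>evaluate</code> returns loss, accuracy and the AUC metric in that order):</p>
<pre><code>for epoch in range(15):
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
    loss, acc, auc = model.evaluate(x_test, y_test, verbose=0)
    print('epoch %d: full-pass val auc_roc = %.4f' % (epoch + 1, auc))
</code></pre>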
<hr>
<p>Something strange:</p>
<p>There isn't any <code>y_pred</code> in your <code>roc_callback</code>. Are you calling a <code>model.predict()</code> inside it?</p> | python|tensorflow|keras | 2 |
378,224 | 51,371,835 | Replace None with NaN and ignore NoneType in Pandas | <p>I'm attempting to create a raw string variable from a pandas dataframe, which will eventually be written to a <em>.cfg</em> file, by firstly joining two columns together as shown below and avoiding <code>None</code>:</p>
<p>Section of <code>df</code>: </p>
<pre><code> command value
...
439 sensitivity "0.9"
440 cl_teamid_overhead_always 1
441 host_writeconfig None
...
</code></pre>
<p><code>code</code>:</p>
<pre><code>...
df = df['value'].replace('None', np.nan, inplace=True)
print df
df = df['command'].astype(str)+' '+df['value'].astype(str)
print df
cfg_output = '\n'.join(df.tolist())
print cfg_output
</code></pre>
<p>I've attempted to replace all the <code>None</code> values with <code>NaN</code> first so that <strong><em>no</em></strong> lines in <code>cfg_output</code> contain "None" as part of the string. However, by doing so I seem to get a few undesired results. I made use of print statements to see what is going on.</p>
<p>It seems that <code>df = df['value'].replace('None', np.nan, inplace=True)</code>, simply outputs <code>None</code>.</p>
<p>It seems that <code>df = df['command'].astype(str)+' '+df['value'].astype(str)</code> and <code>cfg_output = '\n'.join(df.tolist())</code>, cause the following error:</p>
<pre><code>TypeError: 'NoneType' object has no attribute '__getitem__'
</code></pre>
<p>Therefore, I was thinking that by ignoring any occurrences of NaN, the code may run smoothly, although I'm unsure about how to do so using <code>Pandas</code></p>
<p>Ultimately, my <strong><em>desired output</em></strong> would be as followed:</p>
<pre><code>sensitivity "0.9"
cl_teamid_overhead_always 1
host_writeconfig
</code></pre> | <p>First of all, <code>df['value'].replace('None', np.nan, inplace=True)</code> returns <code>None</code> because you're calling the method with the <code>inplace=True</code> argument. This argument tells <code>replace</code> to not return anything but instead modify the original <code>dataframe</code> as it is. Similar to how <code>pop</code> or <code>append</code> work on lists. </p>
<p>With that being said, you can also get the desired output calling <code>fillna</code> with an empty string:</p>
<pre><code>import pandas as pd
import numpy as np
d = {
'command': ['sensitivity', 'cl_teamid_overhead_always', 'host_writeconfig'],
'value': ['0.9', 1, None]
}
df = pd.DataFrame(d)
# df['value'].replace('None', np.nan, inplace=True)
df = df['command'].astype(str) + ' ' + df['value'].fillna('').astype(str)
cfg_output = '\n'.join(df.tolist())
>>> print(cfg_output)
sensitivity 0.9
cl_teamid_overhead_always 1
host_writeconfig
</code></pre> | python|pandas|numpy|dataframe | 2 |
378,225 | 51,499,376 | How to display GroupBy Count as Bokeh vbar for categorical data | <p>I have a small issue creating a Bokeh <strong>vbar</strong> in 0.13.0
from a dataframe <code>groupby</code> <code>count</code> operation. The response <a href="https://stackoverflow.com/questions/46343429/how-use-bokeh-vbar-chart-parameter-with-groupby-object">here</a> was for a multi-level group by, whereas mine isn't. </p>
<h3>Updates since posting</h3>
<ul>
<li>added sample data and code based on provided answer to see if issue is my code or something else</li>
</ul>
<h3>Outline</h3>
<p>The pandas dataframe contains survey responses </p>
<ul>
<li>Excellent</li>
<li>Good</li>
<li>Poor</li>
<li>Satisfactory</li>
<li>Very Good</li>
</ul>
<p>under columns <code>('ResponseID','RateGeneral','RateAccomodation','RateClean','RateServices')</code> and the dtype has been set as category. I want to display a Bokeh vbar of the response count groupby using </p>
<pre><code>DemoDFCount = DemoDF.groupby('RateGeneral').count()
</code></pre>
<p>My bokeh code looks like this</p>
<pre><code>pTest= figure(title='Rating in General',plot_height=350)
pTest.vbar(width=0.9,source=DemoDFCount, x='RateGeneral',top='ResponseID')
show(pTest)
</code></pre>
<p>but it doesn't produce any chart, only a title and a toolbar:
<a href="https://i.stack.imgur.com/g6lgz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g6lgz.png" alt="Bokeh"></a></p>
<p>If I use pandas <code>DemoDFCount.plot.bar(legend=False)</code> I can plot something but how do I create this chart in bokeh?
<a href="https://i.stack.imgur.com/qIHjS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qIHjS.png" alt="dataframe bar plot"></a></p>
<h2>Sample data as json export</h2>
<p>50 rows of sample data from <code>DemoDF.to_json()</code></p>
<pre><code>'{"ResponseID":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8,"8":9,"9":10,"10":11,"11":12,"12":13,"13":14,"14":15,"15":16,"16":17,"17":18,"18":19,"19":20,"20":21,"21":22,"22":23,"23":24,"24":25,"25":26,"26":27,"27":28,"28":29,"29":30,"30":31,"31":32,"32":33,"33":34,"34":35,"35":36,"36":37,"37":38,"38":39,"39":40,"40":41,"41":42,"42":43,"43":44,"44":45,"45":46,"46":47,"47":48,"48":49,"49":50},"RateGeneral":{"0":"Good","1":"Satisfactory","2":"Good","3":"Poor","4":"Good","5":"Satisfactory","6":"Excellent","7":"Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Satisfactory","13":"Excellent","14":"Satisfactory","15":"Very Good","16":"Satisfactory","17":"Excellent","18":"Very Good","19":"Excellent","20":"Satisfactory","21":"Good","22":"Satisfactory","23":"Excellent","24":"Satisfactory","25":"Good","26":"Excellent","27":"Very Good","28":"Good","29":"Very Good","30":"Good","31":"Satisfactory","32":"Very Good","33":"Very Good","34":"Very Good","35":"Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Good","41":"Satisfactory","42":"Very Good","43":"Very Good","44":"Poor","45":"Excellent","46":"Good","47":"Excellent","48":"Satisfactory","49":"Good"},"RateAccomodation":{"0":"Very Good","1":"Excellent","2":"Satisfactory","3":"Satisfactory","4":"Good","5":"Good","6":"Very Good","7":"Very Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Satisfactory","13":"Excellent","14":"Good","15":"Very Good","16":"Good","17":"Excellent","18":"Excellent","19":"Very Good","20":"Good","21":"Satisfactory","22":"Good","23":"Excellent","24":"Satisfactory","25":"Very Good","26":"Excellent","27":"Excellent","28":"Good","29":"Very Good","30":"Very Good","31":"Very Good","32":"Excellent","33":"Very Good","34":"Very Good","35":"Very Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Excellent","41":"Poor","42":"Very Good","43":"Very Good","44":"Poor","45":"Excellent","46":"Satisfactory","47":"Excellent","48":"Good","49":"Good"},"RateClean":{"0":"Excellent","1":"Excellent","2":"Satisfactory","3":"Good","4":"Excellent","5":"Very Good","6":"Very Good","7":"Excellent","8":"Excellent","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Good","13":"Good","14":"Excellent","15":"Excellent","16":"Good","17":"Excellent","18":"Excellent","19":"Excellent","20":"Good","21":"Very Good","22":"Poor","23":"Very Good","24":"Satisfactory","25":"Very Good","26":"Excellent","27":"Good","28":"Poor","29":"Good","30":"Excellent","31":"Good","32":"Good","33":"Very Good","34":"Satisfactory","35":"Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Very Good","41":"Satisfactory","42":"Excellent","43":"Excellent","44":"Very Good","45":"Excellent","46":"Good","47":"Excellent","48":"Good","49":"Excellent"},"RateServices":{"0":"Very Good","1":"Excellent","2":"Good","3":"Good","4":"Excellent","5":"Good","6":"Good","7":"Very Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Good","13":"Very Good","14":"Good","15":"Excellent","16":"Poor","17":"Excellent","18":"Excellent","19":"Excellent","20":"Good","21":"Good","22":"Very Good","23":"Excellent","24":"Satisfactory","25":"Very Good","26":"Excellent","27":"Very Good","28":"Good","29":"Excellent","30":"Very Good","31":"Excellent","32":"Good","33":"Excellent","34":"Very Good","35":"Very Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Very 
Good","41":"Satisfactory","42":"Excellent","43":"Excellent","44":"Good","45":"Excellent","46":"Very Good","47":"Excellent","48":"Good","49":"Very Good"}}'
</code></pre> | <p>The fact that it is multi-level in the other question is not really relevant. When you use a Pandas <code>GroupBy</code> as a data source for Bokeh, Bokeh uses the results of <code>group.describe</code> (which includes counts for each column per group) as the contents of the data source. Here is a complete example that shows Counts-per-Origin from the "cars" data set:</p>
<pre><code>from bokeh.io import show, output_file
from bokeh.plotting import figure
from bokeh.sampledata.autompg import autompg as df
output_file("groupby.html")
df.origin = df.origin.astype(str)
group = df.groupby('origin')
p = figure(plot_height=350, x_range=group, title="Count by Origin",
toolbar_location=None, tools="")
# using yr_count, but count for any column would work
p.vbar(x='origin', top='yr_count', width=0.8, source=group)
p.y_range.start = 0
p.xgrid.grid_line_color = None
show(p)
</code></pre>
<p><a href="https://i.stack.imgur.com/Xeqnpm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xeqnpm.png" alt="enter image description here"></a></p> | bokeh|pandas-groupby | 1 |
378,226 | 51,297,668 | Defining a default argument after with None: what if it's an array? | <p>I'm passing an argument to a function such that I want to delay giving the default parameter, in the usual way:</p>
<pre><code>def f(x = None):
if x == None:
x = ...
</code></pre>
<p>The only problem is that <code>x</code> is likely to be a numpy array. Then <code>x == None</code> returns a boolean array, which I can't condition on. The compiler suggests to use <code>.any()</code> or <code>.all()</code></p>
<p>But if I write</p>
<pre><code>def f(x = None):
if (x == None).any():
x = ...
</code></pre>
<p>this won't work if <code>x</code> goes to its default value, because then <code>None == None</code> is a Boolean, which has no <code>.any()</code> or <code>.all()</code> methods. What's my move here?</p> | <p>When comparing against <strong><code>None</code></strong>, it is a good practice to use <strong><code>is</code></strong> as opposed to <code>==</code>. Usually it doesn't make a difference, but since objects are free to implement equality any way they see fit, it is not always a reliable option.</p>
<p>Unfortunately, this is one of those cases where <code>==</code> doesn't cut it, since comparing to numpy arrays returns a boolean mask based on the condition. Luckily, there is only a single instance of <strong><code>None</code></strong> in any given Python program, so we can actually check the identity of an object using the <strong><code>is</code></strong> operator to figure out if it is <strong><code>None</code></strong> or not.</p>
<pre><code>>>> None is None
True
>>> np.array([1,2,3]) is None
False
</code></pre>
<p>So no need for <strong><code>any</code></strong> or <strong><code>all</code></strong>, you can update your function to something like:</p>
<pre><code>def f(x=None):
if x is None:
print('None')
else:
print('Not none')
</code></pre>
<p>In action:</p>
<pre><code>>>> f()
None
>>> f(np.array([1,2,3]))
Not none
</code></pre> | python|numpy|parameters|arguments | 7 |
378,227 | 51,353,928 | Extract string if match the value in another list | <p>I want to get the value of the lookup list instead of a boolean. I have tried the following codes:</p>
<pre><code>val = pd.DataFrame(['An apple','a Banana','a cat','a dog'])
lookup = ['banana','dog']
# I tried the follow code:
val.iloc[:,0].str.lower().str.contains('|'.join(lookup))
# it returns:
0 False
1 True
2 False
3 True
Name: 0, dtype: bool
</code></pre>
<p>What I want:</p>
<pre><code>0 False
1 banana
2 False
3 dog
</code></pre>
<p>Any help is appreciated.</p> | <p>You can use <strong><code>extract</code></strong> instead of <strong><code>contains</code></strong>, and <code>fillna</code> with <code>False</code>:</p>
<pre><code>import re
p = rf'\b({"|".join(lookup)})\b'
val[0].str.extract(p, expand=False, flags=re.I).fillna(False)
0
0 False
1 banana
2 False
3 dog
</code></pre> | python|pandas | 10 |
378,228 | 51,148,914 | pandas multiindex set_labels | <p>I have a pandas multiindex like this one</p>
<pre><code>result.index
MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]],
labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]],
names=['ref', None])
</code></pre>
<p>And I want to change the second label by this one</p>
<pre><code>new_label
[-0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4]
</code></pre>
<p>so the result should be </p>
<pre><code>result.index
MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]],
labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [-0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4]],
names=['ref', None])
</code></pre>
<p>I tried with</p>
<pre><code>result.index.set_labels(labels=new_label,level=1)
</code></pre>
<p>But instead I get this</p>
<pre><code>MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]],
labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
names=['wnd dir ref', None])
</code></pre>
<p>The labels are filled with 0.</p>
<p>What is wrong or missing?</p> | <p>If you want to use <code>set_labels</code> you need the same types, here integers (it seems to be a bug):</p>
<pre><code>#test if working with integers
mux1 = mux.set_labels((np.array(new_label) * 100).astype(int), level=1)
print (mux1)
MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]],
labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [-90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40]],
names=['ref', None])
</code></pre>
<hr>
<pre><code>mux = pd.MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]],
labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]],
names=['ref', None])
df = pd.DataFrame([0] * 55, index=mux, columns=['a'])
</code></pre>
<p>A possible solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> to build a new 3-level MultiIndex and then remove the second level with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p>
<pre><code>df = df.set_index([new_label], append=True).reset_index(level=1, drop=True)
</code></pre>
<p>Or create new MultiIndex:</p>
<pre><code>df.index = [df.index.get_level_values(0), new_label]
print (df.head(10))
a
ref
10 -0.90 0
-0.85 0
-0.80 0
-0.75 0
-0.70 0
-0.65 0
-0.60 0
-0.55 0
-0.50 0
-0.45 0
</code></pre>
<p>Also, if you need to set the <code>MultiIndex</code> names:</p>
<pre><code>df.index = pd.MultiIndex.from_arrays([df.index.get_level_values(0),
new_label], names=('ref','new'))
print (df.head(10))
a
ref new
10 -0.90 0
-0.85 0
-0.80 0
-0.75 0
-0.70 0
-0.65 0
-0.60 0
-0.55 0
-0.50 0
-0.45 0
</code></pre> | pandas|label|multi-index | 2 |
378,229 | 51,246,827 | Renaming columns in a Dataframe given that column contains data in a loop | <p><strong>Scenario:</strong> I have a list of dataframes. I am trying to rename the columns and change their order, but the column names do not exactly match, for example: a column might be "iterationlist" or "iteration".</p>
<p>I tried a loop inside a loop to read all the columns and if the name contains what I need, change the name of that column, but I get the error:</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import pandas as pd
import os
from Tkinter import Tk
from tkFileDialog import askdirectory
from os import listdir
from os.path import isfile, join
import glob
# Get content
mypath = "//DGMS/Desktop/uploaded"
all_files = glob.glob(os.path.join(mypath, "*.xls*"))
contentdataframes = [pd.read_excel(f).assign(Datanumber=os.path.basename(f).split('.')[0].split('_')[0], ApplyOn='')
for f in all_files]
#get list of dates and put to dfs
for dfs in contentdataframes:
dfs.rename(index=str, columns={[col for col in dfs.columns if 'iteration' in col]: "iterationlistfinal"})
</code></pre>
<p><strong>Question:</strong> What is the proper way to do this? </p> | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a> to get the column names by substring, and then reorder the columns by joining both subsets of the column list:</p>
<pre><code>contentdataframes = []
for f in all_files:
df = pd.read_excel(f)
df['Datanumber'] = os.path.basename(f).split('.')[0].split('_')[0]
df['ApplyOn']= ''
mask = df.columns.str.contains('iteration')
c1 = df.columns[mask].tolist()
c2 = df.columns[~mask].tolist()
df = df[c1 + c2]
contentdataframes.append(df)
</code></pre> | python|pandas|dataframe | 2 |
378,230 | 51,304,610 | Pandas: shifting columns depending on if NaN or not | <p>I have a dataframe like so:</p>
<pre><code>phone_number_1_clean phone_number_2_clean phone_number_3_clean
NaN NaN 8546987
8316589 8751369 NaN
4569874 NaN 2645981
</code></pre>
<p>I would like <code>phone_number_1_clean</code> to be as populated as possible. This will require shifting either <code>phone_number_2_clean</code> or <code>phone_number_3_clean</code> to <code>phone_number_1_clean</code> and vice versa meaning getting <code>phone_number_2_clean</code> as populated as possible if <code>phone_number_1_clean</code> is populated etc. </p>
<p>The output should look something like:</p>
<pre><code>phone_number_1_clean phone_number_2_clean phone_number_3_clean
8546987 NaN NaN
8316589 8751369 NaN
4569874 2645981 NaN
</code></pre>
<p>I might be able to do it with <code>np.where</code> statements, but that could be messy.</p>
<p>The approach would preferably be vectorised, as it will be applied to large-ish dataframes.</p> | <p>Use:</p>
<pre><code>#for each row remove NaNs and create new Series - rows in final df
df1 = df.apply(lambda x: pd.Series(x.dropna().values), axis=1)
#if possible different number of columns like original df is necessary reindex
df1 = df1.reindex(columns=range(len(df.columns)))
#assign original columns names
df1.columns = df.columns
print (df1)
phone_number_1_clean phone_number_2_clean phone_number_3_clean
0 8546987 NaN NaN
1 8316589 8751369 NaN
2 4569874 2645981 NaN
</code></pre>
<p>Or:</p>
<pre><code>s = df.stack()
s.index = [s.index.get_level_values(0), s.groupby(level=0).cumcount()]
df1 = s.unstack().reindex(columns=range(len(df.columns)))
df1.columns = df.columns
print (df1)
phone_number_1_clean phone_number_2_clean phone_number_3_clean
0 8546987 NaN NaN
1 8316589 8751369 NaN
2 4569874 2645981 NaN
</code></pre>
<p>Or a bit changed <a href="https://stackoverflow.com/a/47898659"><code>justify</code></a> function:</p>
<pre><code>def justify(a, invalid_val=0, axis=1, side='left'):
"""
Justifies a 2D array
Parameters
----------
A : ndarray
Input array to be justified
axis : int
Axis along which justification is to be made
side : str
Direction of justification. It could be 'left', 'right', 'up', 'down'
It should be 'left' or 'right' for axis=1 and 'up' or 'down' for axis=0.
"""
if invalid_val is np.nan:
mask = pd.notnull(a) #changed to pandas notnull
else:
mask = a!=invalid_val
justified_mask = np.sort(mask,axis=axis)
if (side=='up') | (side=='left'):
justified_mask = np.flip(justified_mask,axis=axis)
out = np.full(a.shape, invalid_val, dtype=object)
if axis==1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
</code></pre>
<hr>
<pre><code>df = pd.DataFrame(justify(df.values, invalid_val=np.nan),
index=df.index, columns=df.columns)
print (df)
phone_number_1_clean phone_number_2_clean phone_number_3_clean
0 8546987 NaN NaN
1 8316589 8751369 NaN
2 4569874 2645981 NaN
</code></pre>
<p><strong>Performance</strong>:</p>
<pre><code>#3k rows
df = pd.concat([df] * 1000, ignore_index=True)
In [442]: %%timeit
...: df1 = df.apply(lambda x: pd.Series(x.dropna().values), axis=1)
...: #if possible different number of columns like original df is necessary reindex
...: df1 = df1.reindex(columns=range(len(df.columns)))
...: #assign original columns names
...: df1.columns = df.columns
...:
1.17 s ± 10.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [443]: %%timeit
...: s = df.stack()
...: s.index = [s.index.get_level_values(0), s.groupby(level=0).cumcount()]
...:
...: df1 = s.unstack().reindex(columns=range(len(df.columns)))
...: df1.columns = df.columns
...:
...:
5.88 ms ± 74.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [444]: %%timeit
...: pd.DataFrame(justify(df.values, invalid_val=np.nan),
index=df.index, columns=df.columns)
...:
941 µs ± 131 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre> | python|pandas | 6 |
378,231 | 51,454,967 | The truth value of a Series is ambiguous Pandas | <p>What's the problem with this code? I have used many comparison lambda functions on the dataframe, but this one returns a <code>ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', u'occurred at index 2')</code> error.</p>
<p>I searched and found many questions asked before about it, but none of them fits my problem.</p>
<p>My code:</p>
<pre><code>def Return(close,pClose):
i = ((close - pClose) / close) * 100
if (i > 0):
return 1
if (i < 0):
return 0
df['return'] = df.apply(lambda y:Return(close=df['Close'], pClose=df['pClose']),axis=1)
</code></pre> | <p>The Problem with your code is that you pass the whole column of the dataframe to your function:</p>
<pre><code>df.apply(lambda y:Return(close=df['Close'], pClose=df['pClose']),axis=1)
</code></pre>
<p>In the function you are calculating a new value i which is in fact a column:</p>
<pre><code>i = ((close - pClose) / close) * 100
</code></pre>
<p>In the comparison statement, pandas then cannot decide how to evaluate what you are trying to do, because it gets a whole column as input:</p>
<pre><code>if (i > 0):
</code></pre>
<p>So I think what you want is something like:</p>
<pre><code>df['return'] = df.apply(lambda y:Return(close=y['Close'], pClose=y['pClose']),axis=1)
</code></pre> | python|pandas | 3 |
378,232 | 51,292,318 | TFrecords occupy more space than original JPEG images | <p>I'm trying to convert my JPEG image set into TFRecords. But the TFRecord file is taking almost 5x more space than the image set. After a lot of googling, I learned that when JPEGs are written into TFRecords, they aren't JPEGs anymore. However I haven't come across an understandable code solution to this problem. Please tell me what changes ought to be made in the code below to write JPEGs to TFRecords.</p>
<pre><code>def print_progress(count, total):
pct_complete = float(count) / total
msg = "\r- Progress: {0:.1%}".format(pct_complete)
sys.stdout.write(msg)
sys.stdout.flush()
def wrap_int64(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def wrap_bytes(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def convert(image_paths , labels, out_path):
# Args:
# image_paths List of file-paths for the images.
# labels Class-labels for the images.
# out_path File-path for the TFRecords output file.
print("Converting: " + out_path)
# Number of images. Used when printing the progress.
num_images = len(image_paths)
# Open a TFRecordWriter for the output-file.
with tf.python_io.TFRecordWriter(out_path) as writer:
# Iterate over all the image-paths and class-labels.
for i, (path, label) in enumerate(zip(image_paths, labels)):
# Print the percentage-progress.
print_progress(count=i, total=num_images-1)
# Load the image-file using matplotlib's imread function.
img = imread(path)
# Convert the image to raw bytes.
img_bytes = img.tostring()
# Create a dict with the data we want to save in the
# TFRecords file. You can add more relevant data here.
data = \
{
'image': wrap_bytes(img_bytes),
'label': wrap_int64(label)
}
# Wrap the data as TensorFlow Features.
feature = tf.train.Features(feature=data)
# Wrap again as a TensorFlow Example.
example = tf.train.Example(features=feature)
# Serialize the data.
serialized = example.SerializeToString()
# Write the serialized data to the TFRecords file.
writer.write(serialized)
</code></pre>
<p>Edit: Can someone please answer this?!</p> | <p>Instead of converting the image to an array and back to bytes, we can just use the built-in <code>open</code> function to get the bytes. That way, the compressed image will be written into the TFRecord. </p>
<p>Replace these two lines</p>
<pre><code>img = imread(path)
img_bytes = img.tostring()
</code></pre>
<p>with </p>
<pre><code>img_bytes = open(path,'rb').read()
</code></pre>
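<p>A hedged point to keep in mind (not covered by the linked issue): because the record now stores the compressed JPEG bytes instead of a raw array, the reading side has to decode them when parsing, along these lines (<code>serialized_example</code> stands for one record read back from the file):</p>
<pre><code>features = tf.parse_single_example(serialized_example, {
    'image': tf.FixedLenFeature([], tf.string),
    'label': tf.FixedLenFeature([], tf.int64),
})
# decode_jpeg turns the compressed bytes back into an HxWx3 uint8 tensor
image = tf.image.decode_jpeg(features['image'], channels=3)
</code></pre>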
<p>Reference :</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/9675" rel="noreferrer">https://github.com/tensorflow/tensorflow/issues/9675</a></p> | tensorflow|tfrecord | 6 |
378,233 | 51,254,282 | API download data, recommendations? | <p>I am trying to decode data from an API, and I just cannot think of a clean way to extract the value and time values. I have been trying to do string manipulations, but it ends up very complex. </p>
<pre><code>{"max_scale": "0", "min_scale": "0", "graph_label": "Light Level", "average": "1", "length_of_time": "3600", "upper_warn": "1000", "lower_warn": "30", "cached": false, "values":
[{"value": 0.0, "time": 1531170219},
{"value": 0.0, "time": 1531170159},
{"value": 0.0, "time": 1531170099},
{"value": 0.0, "time": 1531170039},
{"value": 0.0, "time": 1531169979},
{"value": 0.0, "time": 1531169919},
{"value": 0.0, "time": 1531169859},
{"value": 0.0, "time": 1531169799},
{"value": 0.0, "time": 1531169739},
{"value": 0.0, "time": 1531169679},
{"value": 0.0, "time": 1531169619},
{"value": 0.0, "time": 1531166679}],
"timestamp_to": "1531170222.798", "format_string": "%f Lux"}
</code></pre> | <p>This is in JSON format. Use the python <a href="https://docs.python.org/3/library/json.html" rel="nofollow noreferrer">json</a> encoder/decoder to load this data. It will turn it into a dictionary, and something like</p>
<pre><code>my_json_dict['values']
</code></pre>
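<p>will return you that list. As a fuller hedged sketch (assuming the raw response text is held in a string variable, here called <code>raw</code>), the value/time pairs can then go straight into a pandas DataFrame:</p>
<pre><code>import json
import pandas as pd

data = json.loads(raw)                              # decode the JSON string into a dict
df = pd.DataFrame(data['values'])                   # two columns: value, time
df['time'] = pd.to_datetime(df['time'], unit='s')   # epoch seconds -> datetimes
print(df.head())
</code></pre>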
<p>Either way you get the list of value/time pairs directly, with no string manipulation.</p> | python|database|string|pandas|extract | 0 |
378,234 | 51,541,386 | Pandas - min and max of a column up until each line | <p>I have a dataframe like this:</p>
<pre><code>pd.DataFrame({'group': {0: 1, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}, 'year': {0: 2007, 1: 2008, 2: 2009, 3: 2010, 4: 2006, 5: 2007, 6: 2008}, 'amount': {0: 2.0, 1: -4.0, 2: 5, 3: 7.0, 4: 8.0, 5: -10.0, 6: 12.0}})
group year amount
0 1 2007 2
1 1 2008 -4
2 1 2009 5
3 1 2010 7
4 2 2006 8
5 2 2007 -10
6 2 2008 12
</code></pre>
<p>I want to add min, max, number of years that amount is negative,number of years that amount is positive for each group, up until each year (inclusive). My ideal dataframe looks like this</p>
<pre><code> group year amount min_utd max_utd no_n_utd no_p_utd
0 1 2007 2 2 2 0 1
1 1 2008 -4 -4 2 1 1
2 1 2009 5 -4 5 1 2
3 1 2010 7 -4 7 1 3
4 2 2006 8 8 8 0 1
5 2 2007 -10 -10 8 1 1
6 2 2008 12 -10 12 1 2
</code></pre>
<p>I am only aware of <code>agg</code>, which works on the whole group, or <code>rolling</code> for a sliding window, but I don't know how to calculate from the beginning up to each line.</p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cummax.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.cummax</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cummin.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.cummin</code></a>, and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cumsum.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.cumsum</code></a> on comparisons made with <code>lt</code> (<code><</code>) and <code>ge</code> (>=):</p>
<pre><code>df[['min_utd','max_utd']] = df.groupby('group')['amount'].agg(['cummin','cummax'])
df['no_n_utd'] = df['amount'].lt(0).astype(int).groupby(df['group']).cumsum()
df['no_p_utd'] = df['amount'].ge(0).astype(int).groupby(df['group']).cumsum()
print (df)
group year amount min_utd max_utd no_n_utd no_p_utd
0 1 2007 2 2 2 0 1
1 1 2008 -4 -4 2 1 1
2 1 2009 5 -4 5 1 2
3 1 2010 7 -4 7 1 3
4 2 2006 8 8 8 0 1
5 2 2007 -10 -10 8 1 1
6 2 2008 12 -10 12 1 2
</code></pre>
<p>Another solution with same principe but custom function:</p>
<pre><code>def f(x):
a = x.cummin()
b = x.cummax()
c = x.lt(0).cumsum()
d = x.ge(0).cumsum()
return pd.DataFrame({'min_utd':a, 'max_utd':b, 'no_n_utd':c, 'no_p_utd':d})
df = df.join(df.groupby('group')['amount'].apply(f))
print (df)
group year amount min_utd max_utd no_n_utd no_p_utd
0 1 2007 2 2 2 0 1
1 1 2008 -4 -4 2 1 1
2 1 2009 5 -4 5 1 2
3 1 2010 7 -4 7 1 3
4 2 2006 8 8 8 0 1
5 2 2007 -10 -10 8 1 1
6 2 2008 12 -10 12 1 2
</code></pre> | python|pandas | 2 |
378,235 | 51,314,650 | Pandas groupby function returns NaN values | <p>I have a list of people with fields unique_id, sex, born_at (birthday) and I’m trying to group by sex and age bins, and count the rows in each segment.</p>
<p>Can’t figure out why I keep getting NaN or 0 as the output for each segment. </p>
<p>Here’s the latest approach I've taken...</p>
<p>Data sample:</p>
<pre><code>|---------------------|------------------|------------------|
| unique_id | sex | born_at |
|---------------------|------------------|------------------|
| 1 | M | 1963-08-04 |
|---------------------|------------------|------------------|
| 2 | F | 1972-03-22 |
|---------------------|------------------|------------------|
| 3 | M | 1982-02-10 |
|---------------------|------------------|------------------|
| 4 | M | 1989-05-02 |
|---------------------|------------------|------------------|
| 5 | F | 1974-01-09 |
|---------------------|------------------|------------------|
</code></pre>
<p>Code:</p>
<pre><code>df['num_people'] = 1
breakpoints = [18,25,35,45,55,65]
df[['sex','born_at','num_people']].groupby(['sex', pd.cut(df.born_at.dt.year, bins=breakpoints)]).agg('count')
</code></pre>
<p>I’ve tried summing as the agg type, removing NaNs from the data series, pivot_table using the same pd.cut function but no luck. Guessing there’s also probably a better way to do this that doesn’t involve creating a column of 1s.</p>
<p>Desired output would be something like this...
<a href="https://i.stack.imgur.com/J09YT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J09YT.png" alt="enter image description here"></a></p>
<p>The extra born_at column isn't necessary in the output and I'd also like the age bins to be 18 to 24, 25 to 34, etc. instead of 18 to 25, 25 to 35, etc. but I'm not sure how to specify that either.</p> | <p>I think you missed the calculation of the current age. The ranges you define for splitting the birthday years only make sense when you use them for calculating the current age (otherwise all grouped cells will be NaN or zero, because the lowest value in your sample is 1963 and the right-most bin edge is 65). So first of all you want to calculate the age:</p>
<pre><code>datetime.now().year-df.birthday.dt.year
</code></pre>
<p>This information then can be used to group the data (which are previously grouped by gender):</p>
<pre><code>df.groupby(['gender', pandas.cut(datetime.now().year-df.birthday.dt.year, bins=breakpoints)]).agg('count')
</code></pre>
<p>In order to get rid of the nan cells you simply do a fillna(0) like this:</p>
<pre><code>df.groupby(['gender', pandas.cut(datetime.now().year-df.birthday.dt.year, bins=breakpoints)]).agg('count').fillna(0).rename(columns={'birthday':'count'})
</code></pre> | python|pandas|pandas-groupby | 1 |
378,236 | 51,419,237 | TensorFlow FailedPreconditionError: iterator has not been initialized | <p>I want to display the values of tensors.</p>
<p>Here is my code:</p>
<pre><code>#some code here
data = [data_tensor for data_tensor in data_dict.items()]
for i in data:
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print (sess.run(i[1]))
print('_'*100)
</code></pre>
<p>However, I got the error:</p>
<pre><code>FailedPreconditionError (see above for traceback):
GetNext() failed because the iterator has not been initialized.
Ensure that you have run the initializer operation for this iterator
before getting the next element.
</code></pre>
<p>How to solve the problem?</p>
<p>Thank you very much.</p> | <p>It looks like you have a dataset iterator that has not been initialized. A dataset iterator is not a variable, hence does not get initialized with <code>tf.global_variables_initializer()</code>. </p>
<p>You have to initialize it explicitly by calling <code>sess.run(iterator.initializer)</code> on whatever dataset iterator you created (e.g. with <code>iterator = dataset.make_initializable_iterator()</code>). </p>
<hr>
<p>Additionally, note that each dataset iteration (running the <code>GetNext</code> node) yields a <em>complete element</em> of the dataset, even if you only care about a subset of the element. If <code>data_dict</code> is the output of an iteration (created with <code>data_dict = iterator.get_next()</code>), doing <code>print(sess.run(i[1]))</code>, while only giving you one of the k,v pairs in the dictionary, actually yields the whole <code>data_dict</code>. I expect that this pipeline would not give you the output you expect unless you reinitialize the iterator within the for loop.</p>
<p>To make what I'm saying more concrete, if you had a dataset created as follows, you would expect the following iteration outputs:</p>
<pre><code>## dataset: [{'a':0, 'b':10}, {'a':1, 'b':11}, {'a':2, 'b':12}, ...]
dataset = tf.data.Dataset.range(10).map(lambda x: {'a': x, 'b': x + 10})
iterator = dataset.make_initializable_iterator()
next_elem = iterator.get_next()
with tf.Session() as sess:
sess.run(iterator.initializer)
print(sess.run(next_elem['a'])) # 0
print(sess.run(next_elem['a'])) # 1
print(sess.run(next_elem['b'])) # 12
</code></pre> | python|tensorflow | 5 |
378,237 | 51,375,255 | Bar plot from dataframe | <p>I have a data frame that looks something like this. </p>
<pre><code>print (df)
a b
0 1 5896
1 1 4000
2 1 89647
3 2 54
4 2 3568
5 2 48761
6 3 5896
7 3 2800
8 3 5894
</code></pre>
<p><a href="https://i.stack.imgur.com/sazhb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sazhb.png" alt="enter image description here"></a></p>
<p>And I want to make a bar plot. That looks like this. </p>
<p><a href="https://i.stack.imgur.com/hoi7W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hoi7W.png" alt="enter image description here"></a></p>
<p>I tried with <code>groupby()</code>, but it plots only one value for 1, one value for 2, etc... </p>
<pre><code>a = df_result.groupby(['column1'])['column2'].mean()
a.plot.bar()
plt.show()
</code></pre>
<p>Would appreciate some guidance on how to solve the problem, so I would have all of the values in a chart. </p> | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> first to reshape the data:</p>
<pre><code>a = df.set_index(['a',df.groupby('a').cumcount()])['b'].unstack()
print (a)
0 1 2
a
1 5896 4000 89647
2 54 3568 48761
3 5896 2800 5894
a.plot.bar()
</code></pre>
<p><a href="https://i.stack.imgur.com/hNFJ2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hNFJ2.png" alt="graph"></a></p> | python|pandas|plot | 3 |
378,238 | 51,409,861 | Perform a 'join' on two numpy arrays | <p>I have two numpy array's that look like the following:</p>
<pre><code>a = np.array([[1, 10], [2, 12], [3, 5]])
b = np.array([[1, 0.78], [3, 0.23]])
</code></pre>
<p>The first number in the list is the id parameter, and the second one is a value. I'm looking to combine them. The expected output to be equal to this:</p>
<pre><code>np.array([1, 10, 0.78], [2, 12, 0], [3, 5, 0.23])
</code></pre>
<p>Is there a function (or combination of functions) that can do this for me? Any help is greatly appreciated.</p>
<p>If an object is not found, a 0 is put in its place.</p> | <p>You are using the first element like a <code>key</code> of a dictionary or an index of a Pandas series. So I used those tools, which are better suited for the combination you are looking to do, and then converted back to the array you are looking for.</p>
<pre><code>import pandas as pd
import numpy as np
a = np.array([[1, 10], [2, 12], [3, 5]])
b = np.array([[1, 0.78], [3, 0.23]])
pd.concat(
map(pd.Series, map(dict, (a, b))), axis=1
).fillna(0).reset_index().values
array([[ 1. , 10. , 0.78],
[ 2. , 12. , 0. ],
[ 3. , 5. , 0.23]])
</code></pre>
<p>Notes:</p>
<ol>
<li>I map <code>dict</code> and <code>pd.Series</code> on the iterable <code>(a, b)</code></li>
<li>I pass those to <code>pd.concat</code> which produces a Pandas DataFrame</li>
<li>Fill in missing values with <code>0</code></li>
<li>Reset the index to get back those keys of yours</li>
<li>Get at just the values</li>
</ol>
<hr>
<p>If you have another array</p>
<pre><code>a = np.array([[1, 10], [2, 12], [3, 5]])
b = np.array([[1, 0.78], [3, 0.23]])
c = np.array([[1, 3.14], [2, 3.14]])
pd.concat(
map(pd.Series, map(dict, (a, b, c))), axis=1
).fillna(0).reset_index().values
array([[ 1. , 10. , 0.78, 3.14],
[ 2. , 12. , 0. , 3.14],
[ 3. , 5. , 0.23, 0. ]])
</code></pre>
<hr>
<p>If you want to quickly convert your arrays to Pandas series:<br>
Notice that I wrote to new names <code>a_</code>, <code>b_</code>, and <code>c_</code> to avoid overwriting your other names</p>
<pre><code>a_, b_, c_ = map(pd.Series, map(dict, (a, b, c)))
</code></pre>
<hr>
<p>To get a DataFrame</p>
<pre><code>df = pd.concat(map(pd.Series, map(dict, (a, b, c))), axis=1).fillna(0)
df
0 1 2
1 10 0.78 3.14
2 12 0.00 3.14
3 5 0.23 0.00
</code></pre> | python|python-3.x|pandas|numpy|data-manipulation | 2 |
378,239 | 51,485,042 | set_printoptions for numpy array doesn't work for numpy ndarray? | <p>I'm trying to use <code>set_printoptions</code> from the answer to the question <a href="https://stackoverflow.com/questions/2891790/how-to-pretty-printing-a-numpy-array-without-scientific-notation-and-with-given">How to pretty-printing a numpy.array without scientific notation and with given precision?</a></p>
<p>But I get this error:</p>
<pre><code>Traceback (most recent call last):
File "neural_network.py", line 57, in <module>
output.set_printoptions(precision=3)
AttributeError: 'numpy.ndarray' object has no attribute 'set_printoptions'
</code></pre>
<p>Apparently, not all <code>numpy</code> arrays are created equal, and what works for a regular <code>numpy.array</code> doesn't work for a <code>numpy.ndarray</code>.</p>
<p>How can I format a <code>numpy.ndarray</code> for printing so as to remove scientific notation?</p>
<p><em>UPDATE</em></p>
<p>Changing the call to <code>numpy.set_printoptions()</code> removes the error, but has no effect on the print format of the ndarray contents.</p> | <p>Try <code>numpy.array2string</code> which takes <code>ndarray</code> as input and you can set precision.</p>
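<p>A minimal hedged sketch (the array contents here are just illustrative):</p>
<pre><code>import numpy as np

a = np.array([[1.23456789e-04, 2.345678e+02]])
# precision limits the decimals; suppress_small avoids scientific notation for tiny values
print(np.array2string(a, precision=3, suppress_small=True))
</code></pre>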
<p>Scroll down in the documentation link below for examples.</p>
<p><a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.array2string.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.array2string.html</a></p> | python|arrays|numpy | 2 |
378,240 | 51,529,463 | Debug Pytorch Optimizer | <p>When I run <code>optimizer.step</code> on my code, I get this error</p>
<p>RuntimeError: sqrt not implemented for 'torch.LongTensor'</p>
<pre><code>C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
186 # but it's overkill for just that one bit of state.
187 def magic_deco(arg):
--> 188 call = lambda f, *a, **k: f(*a, **k)
189
190 if callable(arg):
C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns)
1178 else:
1179 st = clock2()
-> 1180 exec(code, glob, local_ns)
1181 end = clock2()
1182 out = None
<timed exec> in <module>()
C:\Program Files\Anaconda3\lib\site-packages\torch\optim\adam.py in step(self, closure)
98 denom = max_exp_avg_sq.sqrt().add_(group['eps'])
99 else:
--> 100 denom = exp_avg_sq.sqrt().add_(group['eps'])
101
102 bias_correction1 = 1 - beta1 ** state['step']
RuntimeError: sqrt not implemented for 'torch.LongTensor'
</code></pre>
<p><strong>I am using my own loss function. My question is how will I debug this error? Is there a quick way to see the type of all my variables? I am manually doing it and all of them are type float (including the output of my custom loss). I can't figure out why we are even getting an error related to a LongTensor. How does the optimizer.step function work in PyTorch?</strong></p>
<p>Just in case, below is most of the code.
This is the model:</p>
<pre><code>class LSTM(nn.Module):
def __init__(self, mel_channels=40, frames=81, hidden_dim=768, proj_dim=256):
super(LSTM, self).__init__()
self.hidden_dim = hidden_dim
self.mel_channels = mel_channels
self.frames = frames
self.proj_dims = proj_dim
weight = torch.tensor([10])
bias = torch.tensor([-5])
self.w = nn.Parameter(weight)
self.b = nn.Parameter(bias)
# The LSTM takes word embeddings as inputs, and outputs hidden states
# with dimensionality hidden_dim.
self.lstm1 = nn.LSTM(mel_channels, hidden_dim, batch_first=False)
print("here1")
self.lstm2 = nn.LSTM(proj_dim, hidden_dim, batch_first=False)
self.lstm3 = nn.LSTM(proj_dim, hidden_dim, batch_first=False)
self.lstms = [self.lstm1, self.lstm2, self.lstm3]
self.proj1 = nn.Linear(hidden_dim, proj_dim)
self.proj2 = nn.Linear(hidden_dim, proj_dim)
self.proj3 = nn.Linear(hidden_dim, proj_dim)
self.projs = [self.proj1, self.proj2, self.proj3]
def init_states(self, batchsize):
# Before we've done anything, we dont have any hidden state.
# Refer to the Pytorch documentation to see exactly
# why they have this dimensionality.
# The axes semantics are (num_layers, minibatch_size, hidden_dim)
return [(torch.zeros(1, batchsize, self.hidden_dim),
torch.zeros(1, batchsize, self.hidden_dim)),
(torch.zeros(1, batchsize, self.hidden_dim),
torch.zeros(1, batchsize, self.hidden_dim)),
(torch.zeros(1, batchsize, self.hidden_dim),
torch.zeros(1, batchsize, self.hidden_dim)),
]
def forward(self, inputs, states=None):
time, batchsize, inputdim = list(inputs.shape)
if states is None:
states = self.init_states(batchsize)
output = inputs
print(output.type())
for i in range(3):
print(output.type())
output, state = self.lstms[i](output, states[i])
output = self.projs[i](output)
# perform normalization on this output here
output = output[-1]
print(output.type())
output = F.normalize(output, p=2, dim=-1)
print(output.type())
self.state = state
print(output.type())
return output
def get_w(self):
print(get_w.type())
return(self.w)
def get_b(self):
print(get_b.type())
return(self.b)
def get_state(self):
print(get_state())
return(self.state)
</code></pre>
<p>This is the custom loss:</p>
<pre><code>class CustomLoss(_Loss):
def __init__(self, size_average=True, reduce=True):
super(CustomLoss, self).__init__(size_average, reduce)
def forward(self, S, N, M, type='softmax',):
return self.loss_cal(S, N, M, type)
def loss_cal(self, S, N, M, type="softmax",):
self.A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)]
for i in range(N)], dim=0)
if type == "softmax":
self.B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8)
total = torch.abs(torch.sum(self.A - self.B))
else:
raise AssertionError("loss type should be softmax or contrast !")
return total
</code></pre>
<p>Finally, this is the main file</p>
<pre><code>model=LSTM()
optimizer = optim.Adam(list(model.parameters()), lr=LEARNING_RATE)
model = model.to(device)
best_loss = 100.
generator = SpeakerVerificationDataset()
dataloader = DataLoader(generator, batch_size=4,
shuffle=True, num_workers=0)
loss_history = []
update_counter = 1
for epoch in range(NUM_EPOCHS):
print("Epoch # : ", epoch + 1)
for step in range(STEPS_PER_EPOCH):
# get batch dataset
for i_batch, sample_batched in enumerate(dataloader):
print(sample_batched['MelData'].size())
inputs = sample_batched['MelData'].float()
inputs=sample_batched['MelData'].view(180, M*N, 40).float()
print((inputs.size()))
inputs = inputs
#print(here)
# remove previous gradients
optimizer.zero_grad()
# get gradients and loss at this iteration
#predictions,state,w,b = model(inputs)
predictions = model(inputs)
w = model.w
b = model.b
predictions = similarity(output=predictions,w=w,b=b)
#loss = CustomLoss()
S = predictions
loss_func = CustomLoss()
loss = loss_func.loss_cal(S=S,N=N,M=M)
loss.backward()
# update the weights
print("start optimizing")
optimizer.step()
loss_history.append(loss.item())
print(update_counter, ":", loss_history[-1])
update_counter += 1
print()
# save the weights
torch.save(model.state_dict(), CHECKPOINT_PATH)
print("Saving weights")
print()
print()
</code></pre> | <p>The error comes from here:</p>
<pre><code>weight = torch.tensor([10])
bias = torch.tensor([-5])
self.w = nn.Parameter(weight)
self.b = nn.Parameter(bias)
</code></pre>
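<p>(A hedged aside, since the question also asks for a quick way to see the types: printing the dtype of every registered parameter shows immediately which ones are LongTensors, which is what breaks Adam's <code>sqrt()</code>.)</p>
<pre><code>for name, p in model.named_parameters():
    print(name, p.dtype)   # before the fix, w and b are reported as torch.int64
</code></pre>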
<p>Had to change it to</p>
<pre><code>weight = torch.tensor([10.0])
bias = torch.tensor([-5.0])
self.w = nn.Parameter(weight)
self.b = nn.Parameter(bias)
</code></pre> | pytorch | 1 |
378,241 | 51,246,823 | Create a line graph per bin in Python 3 | <p>I have a dataframe called 'games':</p>
<pre><code>Game_id Goals P_value
1 2 0.4
2 3 0.321
45 0 0.64
</code></pre>
<p>I need to split the P value into 0.05 steps, bin the rows per P value and then create a line graph that shows the sum per P value.</p>
<p>What I currently have:</p>
<pre><code>games.set_index('p value', inplace=True)
games.sort_index()
np.cumsum(games['goals']).plot()
</code></pre>
<p>But I get this:</p>
<p><a href="https://i.stack.imgur.com/BJyXr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BJyXr.png" alt="enter image description here"></a></p>
<p>No matter what I tried, I couldn't group the P values and show the sum of goals per P value.
I also tried to use <code>matplotlib.pyplot</code>, but then I couldn't use the <code>cumsum</code> function. </p> | <p>If I understood you correctly, you want to have discrete steps in the p-value of width 0.05 and show the cumulative sum?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# create some random example data
df = pd.DataFrame({
'goals': np.random.poisson(3, size=1000),
'p_value': np.random.uniform(0, 1, size=1000)
})
# define binning in p-value
bin_edges = np.arange(0, 1.025, 0.05)
bin_center = 0.5 * (bin_edges[:-1] + bin_edges[1:])
bin_width = np.diff(bin_edges)
# find the p_value bin, each row belongs to
# 0 is underflow, len(edges) is overflow bin
df['bin'] = np.digitize(df['p_value'], bins=bin_edges)
# get the number of goals per p_value bin
goals_per_bin = df.groupby('bin')['goals'].sum()
print(goals_per_bin)
# not every bin might be filled, so we will use pandas index
# matching to align the per-bin sums with the full set of bins
binned = pd.DataFrame({
'center': bin_center,
'width': bin_width,
'goals': np.zeros(len(bin_center))
}, index=np.arange(1, len(bin_edges)))
binned['goals'] = goals_per_bin.reindex(binned.index, fill_value=0)  # keep empty bins at 0 instead of NaN
plt.step(
binned['center'],
binned['goals'],
where='mid',
)
plt.xlabel('p-value')
plt.ylabel('goals')
plt.show()
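# Alternative sketch (an assumption, using pandas.cut instead of np.digitize;
# it relies on the same df and bin_edges defined above):
# df['bin'] = pd.cut(df['p_value'], bins=bin_edges)
# goals_per_bin = df.groupby('bin')['goals'].sum()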
</code></pre> | python-3.x|pandas|plot | 0 |
378,242 | 51,457,942 | Regression plot is wrong (python) | <p>My program reads the MPG vs. weight relationship and draws a graph of what it is supposed to look like, but as you can see the graph does not look right.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#read txt file
dataframe= pd.read_table('auto_data71.txt',delim_whitespace=True,names=['MPG','Cylinder','Displacement','Horsepower','Weight','acceleration','Model year','Origin','Car Name'])
dataframe.dropna(inplace=True)
#filter the un-necessary columns
X = dataframe.iloc[:,4:5].values
Y = dataframe.iloc[:,0:1].values
#scale data
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_Y= StandardScaler()
X = sc_X.fit_transform(X)
Y = sc_Y.fit_transform(Y)
#split data into train and test set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.2)
#create model
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_reg = PolynomialFeatures(degree=2)
poly_X = poly_reg.fit_transform(x_train)
poly_reg.fit(poly_X,y_train)
regressor2= LinearRegression()
regressor2.fit(poly_X,y_train)
#graph
result = regressor2.predict(poly_X)
plt.scatter(x_train,y_train,color='red')
plt.plot(x_train, result,color='blue')
plt.show()
</code></pre>
<p>The output is shown below. As you can see, the regression line does not look right. Any help will be much appreciated.</p>
<p><img src="https://i.stack.imgur.com/K8SRE.png" alt="as you can see the regression line does not look right. Any help will be much appreciated"></p>
<pre><code>#auto_data.txt(part of data...)
</code></pre>
<p><strong>Note:</strong> I am only using the weight and MPG columns for this code.
The file columns are (mpg, cylinders, displacement, horsepower, weight, acceleration, model year, origin, car name).</p>
<pre><code>27.0 4. 97.00 88.00 2130. 14.5 71. 3. "datsun pl510"
28.0 4. 140.0 90.00 2264. 15.5 71. 1. "chevrolet vega 2300"
25.0 4. 113.0 95.00 2228. 14.0 71. 3. "toyota corona"
25.0 4. 98.00 NA 2046. 19.0 71. 1. "ford pinto"
NA 4. 97.00 48.00 1978. 20.0 71. 2. "volkswagen super beetle 117"
19.0 6. 232.0 100.0 2634. 13.0 71. 1. "amc gremlin"
16.0 6. 225.0 105.0 3439. 15.5 71. 1. "plymouth satellite custom"
17.0 6. 250.0 100.0 3329. 15.5 71. 1. "chevrolet chevelle malibu"
19.0 6. 250.0 88.00 3302. 15.5 71. 1. "ford torino 500"
18.0 6. 232.0 100.0 3288. 15.5 71. 1. "amc matador"
14.0 8. 350.0 165.0 4209. 12.0 71. 1. "chevrolet impala"
14.0 8. 400.0 175.0 4464. 11.5 71. 1. "pontiac catalina brougham"
14.0 8. 351.0 153.0 4154. 13.5 71. 1. "ford galaxie 500"
14.0 8. 318.0 150.0 4096. 13.0 71. 1. "plymouth fury iii"
12.0 8. 383.0 180.0 4955. 11.5 71. 1. "dodge monaco (sw)"
13.0 8. 400.0 170.0 4746. 12.0 71. 1. "ford country squire (sw)"
13.0 8. 400.0 175.0 5140. 12.0 71. 1. "pontiac safari (sw)"
18.0 6. 258.0 110.0 2962. 13.5 71. 1. "amc hornet sportabout (sw)"
</code></pre> | <p>You need to <strong>sort</strong> the values before plotting.</p>
<p>DATA: <a href="https://files.fm/u/2g5dxyb4" rel="nofollow noreferrer">https://files.fm/u/2g5dxyb4</a></p>
<p><strong>Use this</strong>:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
data = pd.read_csv('data.txt', delim_whitespace=True)
data.dropna(inplace=True)
X = data['weight'].values
Y = data['mpg'].values
X = X.reshape(-1, 1)
Y = Y.reshape(-1, 1)
#scale data
sc_X = StandardScaler()
sc_Y = StandardScaler()
X = sc_X.fit_transform(X)
Y = sc_Y.fit_transform(Y)
#split data into train and test set
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
#create model
poly_reg = PolynomialFeatures(degree=2)
poly_X = poly_reg.fit_transform(x_train)
regressor2 = LinearRegression()
regressor2.fit(poly_X, y_train)
#graph: sort the training points by the feature value so the curve is drawn left to right
order = np.argsort(x_train[:, 0])
result = regressor2.predict(poly_X[order])
plt.scatter(x_train, y_train, color='red')
plt.plot(x_train[order], result, color='blue')
plt.show()
</code></pre> | python|pandas|scikit-learn | 0 |
378,243 | 51,403,468 | Keras delayed data augmentation | <p>I am trying to apply a custom image augmentation technique in Keras. I am using fit_generator and a generator to yield images. I would like to start applying the image augmentation only after, say, 20 epochs (so the first 20 epochs would not have any data augmentation). Unfortunately the generator does not have a notion of epochs. Any idea how to do this?</p> | <p>The easiest way to do this is to train for 20 epochs with no realtime augmentation (use the Keras ImageDataGenerator with no args) and save your models using a ModelCheckpoint callback. Then reload the model and continue training with realtime augmentation (use an ImageDataGenerator with the transforms of your choice), roughly as sketched below.</p>
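<p>A rough sketch of that two-stage flow (the file name, callback settings and <code>fit_generator</code> arguments here are placeholders, as are <code>model</code>, <code>x_train</code> and <code>y_train</code>):</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

# stage 1: first 20 epochs without any augmentation
plain_gen = ImageDataGenerator()
checkpoint = ModelCheckpoint('stage1_weights.h5', save_weights_only=True)
model.fit_generator(plain_gen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=20, callbacks=[checkpoint])

# stage 2: reload the saved weights and continue with realtime augmentation
model.load_weights('stage1_weights.h5')
aug_gen = ImageDataGenerator(rotation_range=20, horizontal_flip=True)
model.fit_generator(aug_gen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=50, initial_epoch=20)
</code></pre>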
<p>If you want that behavior in one step, you can make your own version of ImageDataGenerator. You just need to make the following changes:</p>
<pre><code>def __init__(self,
batch_counter=0, # count the batches elapsed
steps_per_epoch=0, # pass steps per epoch into the custom ImageDataGenerator on init
n_epoch = 0, # count the epochs elapsed
</code></pre>
<p>Then, just modify the <code>NumpyArrayIterator</code> in your <code>ImageDataGenerator</code> to increment these variables and only call <code>random_transform</code> after the desired number of epochs has elapsed, e.g. <code>self.image_data_generator.batch_counter += 1</code>.</p> | tensorflow|keras | 0 |
378,244 | 51,466,808 | How can I multiply column of the int numpy array to the float digit and stays in int? | <p>I have a numpy array:</p>
<pre><code> >>> b
array([[ 2, 2],
[ 6, 4],
[10, 6]])
</code></pre>
<p>I want to multiply the first column by a float number, and as a result I need an int number, because when I do:</p>
<pre><code>>>> b[:,0] *= 2.1
</code></pre>
<p>It says:</p>
<pre><code>TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
</code></pre>
<p>I need the array that looks like:</p>
<pre><code>array([[ 4, 2],
[12, 4],
[21, 6]])
</code></pre> | <p>@Umang Gupta gave a solution to your problem. I was curious myself as to why this worked, so I'm posting what I found as additional context. FWIW this question has already been asked and answered <a href="https://stackoverflow.com/questions/38673531/multiply-numpy-int-and-float-arrays">here</a>, but that answer also doesn't really walk through what's happening as much as I would have liked, so here's my attempt:</p>
<p>Using the <code>*=</code> operator calls the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.__imul__.html#numpy.ndarray.__imul__" rel="noreferrer"><code>__imul__()</code></a> special method for in-place multiplication of Numpy ndarrays, which in turn <a href="https://github.com/numpy/numpy/blob/master/numpy/lib/mixins.py#L157" rel="noreferrer">calls</a> the universal function (ufunc) <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.multiply.html" rel="noreferrer"><code>multiply()</code></a>. </p>
<p>There are two arguments in <code>multiply()</code> which are relevant here: <code>out</code> and <code>casting</code>. </p>
<p>The <code>out</code> argument specifies the output (along with its type). In the in-place multiplication operator, <code>out</code> is set to <code>self</code>, i.e. the <code>ndarray</code> object which called the multiplication operation. In particular, <a href="https://github.com/numpy/numpy/blob/master/numpy/lib/mixins.py#L40" rel="noreferrer">the exact call</a> for <code>*=</code> looks like this:</p>
<pre><code>ufunc(self, other, out=(self,))
</code></pre>
<p>^ where <code>ufunc = multiply</code>, <code>self = b</code> (<code>ndarray</code>, type <code>int64</code>), and <code>other = 2.1</code> (scalar, type <code>float</code>).</p>
<p>The <code>casting</code> argument, however, determines the rules for what kind of data type casting is permitted as a result of an operation. <a href="https://docs.scipy.org/doc/numpy/reference/ufuncs.html#optional-keyword-arguments" rel="noreferrer">As of Numpy 1.10</a>, the default value for <code>casting</code> is <code>same_kind</code>, <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.can_cast.html#numpy.can_cast" rel="noreferrer">which means</a>: </p>
<blockquote>
<p>only safe casts or casts within a kind, like float64 to float32, are allowed </p>
</blockquote>
<p>Since our <code>ufunc</code> call didn't specify a value for the <code>casting</code> argument, the default (<code>same_kind</code>) is used - but this causes problems because we <em>have</em> specified <code>out</code> as having an <code>int64</code> dtype, which is <em>not</em> the same kind as the output of the int-by-float multiplication. With <code>same_kind</code> casting, the <code>float</code> result of the operation can't be converted to <code>int</code>. That's why we see this error. </p>
<p>We can replicate this error using <code>multiply()</code> explicitly:</p>
<pre><code>np.multiply(b, 2.1, out=b)
TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
</code></pre>
<p>It is possible to relax the <code>casting</code> requirement of <code>multiply()</code>, by setting the argument value to <code>"unsafe"</code>. Then, when <code>out</code> is also set, the output is coerced to the type of <code>out</code>, regardless of whether it's the same kind or not (if possible):</p>
<pre><code>np.multiply(b, 2.1, out=b, casting="unsafe")
# specifying int output and allowing casting to be "unsafe" allows re-conversion to int
array([[ 4, 4],
[12, 8],
[21, 12]])
</code></pre>
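<p>Applied to just the first column of the original example, the same unsafe cast gives the desired result (a sketch; <code>b[:,0]</code> is a view, so the output lands back in <code>b</code>):</p>
<pre><code>np.multiply(b[:,0], 2.1, out=b[:,0], casting="unsafe")
b
array([[ 4,  2],
       [12,  4],
       [21,  6]])
</code></pre>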
<p>Using the normal assignment operator to update <code>b[:,0]</code>, on the other hand, is ok. That's what @Umang Gupta's solution does.<br>
With: </p>
<pre><code>b[:,0] = b[:,0]* 2.1
</code></pre>
<p><code>*</code> calls the <code>multiply</code> ufunc, just like with <code>*=</code>. But since it isn't calling the inplace version of the operation, there's no <code>out</code> argument specified, and so no set type for the output. Then, standard typecasting allows ints to upcast to floats:</p>
<pre><code>np.multiply(b, 2.1)
# float output
array([[ 4.2, 4.2],
[ 12.6, 8.4],
[ 21. , 12.6]])
</code></pre>
<p>Then the normal assignment operator <code>=</code> takes the output of the multiplication and stores it in <code>b[:,0]</code>. Per <a href="https://docs.scipy.org/doc/numpy/user/basics.indexing.html#assigning-values-to-indexed-arrays" rel="noreferrer">the Numpy docs</a> on assigning values to indexed arrays: </p>
<blockquote>
<p>Note that assignments may result in changes if assigning higher types to lower types (like floats to ints)</p>
</blockquote>
<p>So the problem lies in <code>*=</code> operator's automatic setting of the <code>out</code> argument without changing the <code>casting</code> argument from <code>same_kind</code> to <code>unsafe</code>. (Not that this is a bug, just that this is why you are getting an error.) And the accepted solution gets around that by leveraging automatic "downcasting" properties of assignment in Numpy. Hope that helps! (Also, Numpy pros, please feel free to correct any misunderstandings on my part.)</p> | python|numpy | 7 |
378,245 | 51,292,212 | Keras model params are all "NaN"s after reloading | <p>I use transfer learning with Resnet50. I create a new model out of the pretrained model provided by Keras (the 'imagenet').</p>
<p>After training my new model, I save it as follows:</p>
<pre><code># Save the Siamese Network architecture
siamese_model_json = siamese_network.to_json()
with open("saved_model/siamese_network_arch.json", "w") as json_file:
json_file.write(siamese_model_json)
# save the Siamese Network model weights
siamese_network.save_weights('saved_model/siamese_model_weights.h5')
</code></pre>
<p>And later, I reload it as follows to make some predictions:</p>
<pre><code>json_file = open('saved_model/siamese_network_arch.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
siamese_network = model_from_json(loaded_model_json)
# load weights into new model
siamese_network.load_weights('saved_model/siamese_model_weights.h5')
</code></pre>
<p>Then I check whether the weights look reasonable, as follows (for one of the layers):</p>
<pre><code>print("bn3d_branch2c:\n",
siamese_network.get_layer('model_1').get_layer('bn3d_branch2c').get_weights())
</code></pre>
<p>If I train my network for 1 epoch only, I see reasonable values there.</p>
<p>But if I train my model for 18 epochs (which takes 5-6 hours as I have a very slow computer), I just see NaN values, as follows:</p>
<pre><code>bn3d_branch2c:
[array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
...
</code></pre>
<p>What is the trick here? </p>
<p><strong>ADDENDUM 1:</strong></p>
<p>Here is how I create my model. </p>
<p>Here, I have a triplet_loss function that I will need later on.</p>
<pre><code>def triplet_loss(inputs, dist='euclidean', margin='maxplus'):
anchor, positive, negative = inputs
positive_distance = K.square(anchor - positive)
negative_distance = K.square(anchor - negative)
if dist == 'euclidean':
positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True))
negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True))
elif dist == 'sqeuclidean':
positive_distance = K.sum(positive_distance, axis=-1, keepdims=True)
negative_distance = K.sum(negative_distance, axis=-1, keepdims=True)
loss = positive_distance - negative_distance
if margin == 'maxplus':
loss = K.maximum(0.0, 2 + loss)
elif margin == 'softplus':
loss = K.log(1 + K.exp(loss))
returned_loss = K.mean(loss)
return returned_loss
</code></pre>
<p>And here is how I construct my model from start to end. I give the complete code to give the exact picture.</p>
<pre><code>model = ResNet50(weights='imagenet')
# Remove the last layer (Needed to later be able to create the Siamese Network model)
model.layers.pop()
# First freeze all layers of ResNet50. Transfer Learning to be applied.
for layer in model.layers:
layer.trainable = False
# All Batch Normalization layers still need to be trainable so that the "mean"
# and "standard deviation (std)" params can be updated with the new training data
model.get_layer('bn_conv1').trainable = True
model.get_layer('bn2a_branch2a').trainable = True
model.get_layer('bn2a_branch2b').trainable = True
model.get_layer('bn2a_branch2c').trainable = True
model.get_layer('bn2a_branch1').trainable = True
model.get_layer('bn2b_branch2a').trainable = True
model.get_layer('bn2b_branch2b').trainable = True
model.get_layer('bn2b_branch2c').trainable = True
model.get_layer('bn2c_branch2a').trainable = True
model.get_layer('bn2c_branch2b').trainable = True
model.get_layer('bn2c_branch2c').trainable = True
model.get_layer('bn3a_branch2a').trainable = True
model.get_layer('bn3a_branch2b').trainable = True
model.get_layer('bn3a_branch2c').trainable = True
model.get_layer('bn3a_branch1').trainable = True
model.get_layer('bn3b_branch2a').trainable = True
model.get_layer('bn3b_branch2b').trainable = True
model.get_layer('bn3b_branch2c').trainable = True
model.get_layer('bn3c_branch2a').trainable = True
model.get_layer('bn3c_branch2b').trainable = True
model.get_layer('bn3c_branch2c').trainable = True
model.get_layer('bn3d_branch2a').trainable = True
model.get_layer('bn3d_branch2b').trainable = True
model.get_layer('bn3d_branch2c').trainable = True
model.get_layer('bn4a_branch2a').trainable = True
model.get_layer('bn4a_branch2b').trainable = True
model.get_layer('bn4a_branch2c').trainable = True
model.get_layer('bn4a_branch1').trainable = True
model.get_layer('bn4b_branch2a').trainable = True
model.get_layer('bn4b_branch2b').trainable = True
model.get_layer('bn4b_branch2c').trainable = True
model.get_layer('bn4c_branch2a').trainable = True
model.get_layer('bn4c_branch2b').trainable = True
model.get_layer('bn4c_branch2c').trainable = True
model.get_layer('bn4d_branch2a').trainable = True
model.get_layer('bn4d_branch2b').trainable = True
model.get_layer('bn4d_branch2c').trainable = True
model.get_layer('bn4e_branch2a').trainable = True
model.get_layer('bn4e_branch2b').trainable = True
model.get_layer('bn4e_branch2c').trainable = True
model.get_layer('bn4f_branch2a').trainable = True
model.get_layer('bn4f_branch2b').trainable = True
model.get_layer('bn4f_branch2c').trainable = True
model.get_layer('bn5a_branch2a').trainable = True
model.get_layer('bn5a_branch2b').trainable = True
model.get_layer('bn5a_branch2c').trainable = True
model.get_layer('bn5a_branch1').trainable = True
model.get_layer('bn5b_branch2a').trainable = True
model.get_layer('bn5b_branch2b').trainable = True
model.get_layer('bn5b_branch2c').trainable = True
model.get_layer('bn5c_branch2a').trainable = True
model.get_layer('bn5c_branch2b').trainable = True
model.get_layer('bn5c_branch2c').trainable = True
# Used when compiling the siamese network
def identity_loss(y_true, y_pred):
return K.mean(y_pred - 0 * y_true)
# Create the siamese network
x = model.get_layer('flatten_1').output # layer 'flatten_1' is the last layer of the model
model_out = Dense(128, activation='relu', name='model_out')(x)
model_out = Lambda(lambda x: K.l2_normalize(x,axis=-1))(model_out)
new_model = Model(inputs=model.input, outputs=model_out)
anchor_input = Input(shape=(224, 224, 3), name='anchor_input')
pos_input = Input(shape=(224, 224, 3), name='pos_input')
neg_input = Input(shape=(224, 224, 3), name='neg_input')
encoding_anchor = new_model(anchor_input)
encoding_pos = new_model(pos_input)
encoding_neg = new_model(neg_input)
loss = Lambda(triplet_loss)([encoding_anchor, encoding_pos, encoding_neg])
siamese_network = Model(inputs = [anchor_input, pos_input, neg_input],
outputs = loss) # Note that the output of the model is the
# return value from the triplet_loss function above
siamese_network.compile(optimizer=Adam(lr=.0001), loss=identity_loss)
</code></pre>
<p>One thing to notice is that I make all batch normalization layers "trainable" so that the BN-related params can be updated with my training data. This takes a lot of lines; a shorter equivalent loop is sketched in the comment inside the code above.</p> | <p>The solution is inspired by @Gurmeet Singh's recommendation above.</p>
<p>Seemingly, the weights of the trainable layers grew so large during training that they eventually turned into NaNs. That made me think I was saving and reloading my model in the wrong way, but the problem was actually exploding gradients.</p>
<p>I saw a similar issue in github discussions too, which can be checked out here: github.com/keras-team/keras/issues/2378
At the bottom of that thread in github, it is recommended to use lower learning rates to avoid the problem.</p>
<p>In this link (<a href="https://stackoverflow.com/questions/42264567/keras-ml-library-how-to-do-weight-clipping-after-gradient-updates-tensorflow-b">Keras ML library: how to do weight clipping after gradient updates? TensorFlow backend</a>), 2 solutions are discussed (both sketched below):
- using the <strong>clipvalue</strong> parameter in the optimizer, which simply clips each calculated gradient value to the configured range. This is not the recommended solution (as explained in the other thread).
- using the <strong>clipnorm</strong> parameter, which rescales the calculated gradients whenever their L2 norm exceeds the value given by the user.</p>
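<p>What the two options look like on the optimizer (the learning-rate and clipping values here are just placeholders):</p>
<pre><code>from keras.optimizers import Adam

optimizer = Adam(lr=0.0001, clipvalue=0.5)  # clip every gradient element to [-0.5, 0.5]
optimizer = Adam(lr=0.0001, clipnorm=1.)    # rescale gradients whose L2 norm exceeds 1
</code></pre>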
<p>I also thought about using input normalization (to avoid exploding gradients), but then figured out that it is already done in the <strong>preprocess_input(..)</strong> function.
(Check this link for details: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input</a>) It is, however, possible to set the <strong>mode</strong> parameter to <strong>"tf"</strong> (it defaults to <strong>"caffe"</strong> otherwise), which could further help because the <strong>mode="tf"</strong> setting scales pixels to between -1 and 1, but I did not try it.</p>
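<p>If you do want to try the <strong>"tf"</strong> scaling, the generic Keras helper exposes the <strong>mode</strong> argument (a sketch only; as noted above, it was not tried here):</p>
<pre><code>from keras.applications.imagenet_utils import preprocess_input

x = preprocess_input(x, mode='tf')  # scales pixels to the [-1, 1] range
</code></pre>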
<p>In summary, I changed 2 things when compiling the model to be trained:</p>
<p>The line that has been changed is the following:</p>
<p>Before the change:</p>
<pre><code>siamese_network.compile(optimizer=Adam(lr=.0001),
loss=identity_loss)
</code></pre>
<p>After the change:</p>
<pre><code>siamese_network.compile(optimizer=Adam(lr=.00004, clipnorm=1.),
loss=identity_loss)
</code></pre>
<p>1) Used a smaller learning rate to make the gradient updates a bit smaller.
2) Used the clipnorm parameter to rescale (clip) the calculated gradients whenever their norm exceeds the given threshold.</p>
<p>I then trained my network again for 10 epochs. The loss decreases as desired, but more slowly now, and I no longer experience any problems when saving and restoring my model (at least for those 10 epochs; training takes time on my computer).</p>
<p>Note that I set the value of <strong>clipnorm</strong> to <strong>1</strong>. This means that the L2 norm of the gradients is calculated first, and if it exceeds <strong>1</strong>, the gradients are rescaled (clipped). I assume this is a hyperparameter that can be tuned; it affects the time needed to train the model while helping to avoid the exploding gradients problem.</p> | python|tensorflow|machine-learning|keras|transfer-learning | 0 |