Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
---|---|---|---|---|---|---|
0 | 71,036,673 | Why doesn't the for loop save each roll value in each iteration to the histogram? | <p>I am creating a for loop to inspect regression to the mean with dice rolls.
The wanted outcome would be that the histogram shows all the roll values that came up in each iteration.</p>
<p>Why doesn't the for loop save each roll value in each iteration to the histogram?</p>
<p>Furthermore, PyCharm takes forever to load if n > 20000, so the code doesn't execute fully in that case.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
sums = 0
values = [1, 2, 3, 4, 5, 6]
numbers = [500, 1000, 2000, 5000, 10000, 15000, 20000, 50000, 100000]
n = np.random.choice(numbers)
for i in range(n):
    roll = np.random.choice(values) + np.random.choice(values)
    sums = roll + sums
h, h2 = np.histogram(sums, range(2, 14))
plt.bar(h2[:-1], h / n)
plt.title(n)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/SYj4v.png" rel="nofollow noreferrer">Current output</a></p> | <p>You are now overwriting h and h2 in every iteration. Instead, you could append the values to a list and make a histogram of the entire list:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
sums = 0
values = [1, 2, 3, 4, 5, 6]
numbers = [500, 1000, 2000, 5000, 10000, 15000, 20000, 50000, 100000]
n = np.random.choice(numbers)
all_rolls = []
for i in range(n):
    roll = np.random.choice(values) + np.random.choice(values)
    all_rolls.append(roll)
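# a faster sketch for large n (an assumption, not part of the original answer):
# np.random.choice can draw all rolls at once instead of looping, e.g.
# all_rolls = np.random.choice(values, size=n) + np.random.choice(values, size=n)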
h, h2 = np.histogram(all_rolls, range(2, 14))
plt.bar(h2[:-1], h / n)
plt.title(n)
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/umeZi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/umeZi.png" alt="Histogram" /></a></p> | python|numpy|matplotlib | 1 |
1 | 51,636,753 | How to sort strings with numbers in Pandas? | <p>I have a Python Pandas Dataframe, in which a column named <code>status</code> contains three kinds of possible values: <code>ok</code>, <code>must read x more books</code>, <code>does not read any books yet</code>, where <code>x</code> is an integer higher than <code>0</code>.</p>
<p>I want to sort <code>status</code> values according to the order above. </p>
<p>Example:</p>
<pre><code> name status
0 Paul ok
1 Jean must read 1 more books
2 Robert must read 2 more books
3 John does not read any book yet
</code></pre>
<p>I've found some interesting hints, using <a href="https://stackoverflow.com/questions/13838405/custom-sorting-in-pandas-dataframe">Pandas Categorical</a> and <a href="https://stackoverflow.com/questions/23279238/custom-sorting-with-pandas">map</a> but I don't know how to deal with variable values modifying strings.</p>
<p>How can I achieve that?</p> | <p>Use:</p>
<pre><code>a = df['status'].str.extract('(\d+)', expand=False).astype(float)
d = {'ok': a.max() + 1, 'does not read any book yet':-1}
df1 = df.iloc[(-df['status'].map(d).fillna(a)).argsort()]
print (df1)
name status
0 Paul ok
2 Robert must read 2 more books
1 Jean must read 1 more books
3 John does not read any book yet
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li>First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="noreferrer"><code>extract</code></a> integers by <code>regex</code> <code>\d+</code></li>
<li>Then dynamically create <code>dictionary</code> for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="noreferrer"><code>map</code></a> non numeric values</li>
<li>Replace <code>NaN</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="noreferrer"><code>fillna</code></a> for <code>numeric Series</code></li>
<li>Get positions by <a href="https://stackoverflow.com/a/16486305/2901002">argsort</a></li>
<li>Select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="noreferrer"><code>iloc</code></a> for sorted values</li>
</ol> | python|python-3.x|pandas|sorting | 10 |
2 | 51,569,238 | Bar Plot with recent dates left where date is datetime index | <p>I tried to sort the dataframe by its datetime index and then plot the graph, but nothing changed: the latest dates like 2017 and 2018 were still on the right and 2008, 2009 on the left.</p>
<p><a href="https://i.stack.imgur.com/BP8yy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BP8yy.png" alt="enter image description here"></a></p>
<p>I wanted the latest year to come left and old to the right.
This was the dataframe earlier.</p>
<pre><code> Title
Date
2001-01-01 0
2002-01-01 9
2003-01-01 11
2004-01-01 17
2005-01-01 23
2006-01-01 25
2007-01-01 51
2008-01-01 55
2009-01-01 120
2010-01-01 101
2011-01-01 95
2012-01-01 118
2013-01-01 75
2014-01-01 75
2015-01-01 3
2016-01-01 35
2017-01-01 75
2018-01-01 55
</code></pre>
<p>Ignore the values.
Then I sorted the above dataframe by index and plotted again, but there was still no change in the plot.</p>
<pre><code>df.sort_index(ascending=False, inplace=True)
</code></pre> | <p>I guess you've not changed your index to year. This is why it is not working. You can do so by:</p>
<pre><code>df.index = pd.to_datetime(df.Date).dt.year
#then sort index in descending order
df.sort_index(ascending = False , inplace = True)
df.plot.bar()
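# an alternative sketch (not in the original answer): keep the index order as-is
# and simply flip the x-axis of the returned Axes object
# ax = df.plot.bar()
# ax.invert_xaxis()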
</code></pre> | python|pandas|matplotlib|bar-chart|datetimeindex | 0 |
3 | 51,647,773 | Using matplotlib to obtain an overlaid histogram | <p>I am new to python and I'm trying to plot an overlaid histogram for a manipulated data set from <code>Kaggle</code>. I tried doing it with <code>matplotlib</code>. This is a dataset that shows the history of gun violence in USA in recent years. I have selected only few columns for <code>EDA</code>. </p>
<pre><code>import pandas as pd
data_set = pd.read_csv("C:/Users/Lenovo/Documents/R related Topics/Assignment/Assignment_day2/04 Assignment/GunViolence.csv")
state_wise_crime = data_set[['date', 'state', 'n_killed', 'n_injured']]
date_value = pd.to_datetime(state_wise_crime['date'])
import datetime
state_wise_crime['Month'] = date_value.dt.month
state_wise_crime.drop('date', axis=1)
no_of_killed = state_wise_crime.groupby(['state','Year'])['n_killed','n_injured'].sum()
</code></pre>
<p><a href="https://i.stack.imgur.com/s0wSI.png" rel="nofollow noreferrer">I want an overlaid histogram that shows the no. of people killed and no.of people injured with the different states on the x-axis</a></p> | <p>Welcome to Stack Overflow! From next time, please post your data like in below format (not a link or an image) to make us easier to work on the problem. Also, if you ask about a graph output, showing the contents of desired graph (even with hand drawing) would be very helpful :)</p>
<hr>
<p><code>df</code></p>
<pre><code> state Year n_killed n_injured
0 Alabama 2013 9 3
1 Alabama 2014 591 325
2 Alabama 2015 562 385
3 Alabama 2016 761 488
4 Alabama 2017 856 544
5 Alabama 2018 219 135
6 Alaska 2014 49 29
7 Alaska 2015 84 70
8 Alaska 2016 103 88
9 Alaska 2017 70 69
</code></pre>
<p>As I commented in your original post, a bar plot would be more appropriate than histogram in this case since your purpose appears to be visualizing the summary statistics (sum) of each year with state-wise comparison. As far as I know, the easiest option is to use <a href="https://github.com/mwaskom/seaborn" rel="nofollow noreferrer">Seaborn</a>. It depends on how you want to show the data, but below is one example. The code is as simple as below.</p>
<pre><code>import seaborn as sns
sns.barplot(x='Year', y='n_killed', hue='state', data=df)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/MDiOa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MDiOa.png" alt="enter image description here"></a></p>
<p>Hope this helps.</p> | python|pandas|matplotlib|kaggle | 0 |
4 | 51,878,397 | Local scripts conflict with builtin modules when loading numpy | <p>There are many posts about relative/absolute imports issues, and most of them are about Python 2 and/or importing submodules. <strong>This is not my case</strong>: </p>
<ul>
<li>I am using Python 3, so absolute import is the default;</li>
<li>(I have also reproduced this issue with Python 2);</li>
<li>I am not trying to import a submodule from within another submodule, or any other complicated situation. I am just trying to <code>import numpy</code> in a script.</li>
</ul>
<p>My problem is simple:</p>
<pre><code>.
└── foo
├── a.py
└── math.py
1 directory, 2 files
</code></pre>
<p>where <code>a.py</code> just contains <code>import numpy</code>, and <code>math.py</code> contains <code>x++</code> (intentionally invalid).</p>
<p>In that case, running <code>python3 foo/a.py</code> causes an error, due to NumPy seemingly not being able to import the standard <code>math</code> module:</p>
<pre><code>Traceback (most recent call last):
File "foo/a.py", line 1, in <module>
import numpy
File "/path/to/Anaconda3/lib/python3.6/site-packages/numpy/__init__.py", line 158, in <module>
from . import add_newdocs
File "/path/to/Anaconda3/lib/python3.6/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/path/to/Anaconda3/lib/python3.6/site-packages/numpy/lib/__init__.py", line 3, in <module>
import math
File "/private/tmp/test-import/foo/math.py", line 1
x++
^
SyntaxError: invalid syntax
</code></pre>
<p>I am relatively inexperienced with Python, but this looks like a bug to me. I thought statements like <code>import math</code> in Python 3 behaved as absolute imports; how can a local file conflict with a standard module? Am I doing something wrong?</p>
<p><strong>To clarify, what I find surprising is that NumPy is unable to load the standard math module with <code>import math</code>, because I have a file in my local folder named <code>math.py</code>.</strong> Note that I never try to import that module myself.</p>
<hr>
<p><strong>EDIT</strong></p>
<p><strike>This seems to be an issue specific to <code>conda</code> (reproduced with both Anaconda and Miniconda). I am using Anaconda 5.2.0 (on OSX 10.13.6), and people in comments have been able to reproduce with different versions of python/anaconda, and different systems.</strike></p>
<p>I was able to reproduce this issue with:</p>
<ul>
<li>Anaconda3 v5.2.0, using python 3.4, 3.5, 3.6 and 3.7, within a <code>conda</code> environment, or simply using the default binaries (ie <code>/path/to/anaconda3/bin</code>).</li>
<li>Miniconda2, and Miniconda3 (manual install of <code>numpy</code> required), again either within or outside a <code>conda</code> environment.</li>
<li>A clean Homebrew install <code>brew install python</code>.</li>
</ul>
<p>In all cases, it looks like the builtin-modules might be incomplete:</p>
<pre><code>> python3 -c "import sys; print(sys.builtin_module_names)"
('_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time', 'xxsubtype', 'zipimport')
> python2 -c "import sys; print sys.builtin_module_names"
('__builtin__', '__main__', '_ast', '_codecs', '_sre', '_symtable', '_warnings', '_weakref', 'errno', 'exceptions', 'gc', 'imp', 'marshal', 'posix', 'pwd', 'signal', 'sys', 'thread', 'xxsubtype', 'zipimport')
</code></pre>
<hr>
<p><strong>REPRODUCE THIS ISSUE</strong></p>
<p>Make sure you have a version of Python that can import numpy. Open a terminal and type:</p>
<pre><code>D=$(mktemp -d) # temporary folder
pushd "$D" # move there
mkdir foo # create subfolder
echo 'import numpy' >| foo/a.py # script a.py
echo 'x++' >| foo/math.py # script math.py (invalid)
python foo/a.py # run a.py
popd # leave temp folder
</code></pre> | <p>"Absolute import" does not mean "standard library import". It means that <code>import math</code> always tries to import the <code>math</code> module, rather than the old behavior of trying <code>currentpackage.math</code> first if the import occurs inside a package. It does <em>not</em> mean that Python will skip non-stdlib entries on <code>sys.path</code> when figuring out where the <code>math</code> module is. In your situation, by the rules of the Python import system, your <code>math.py</code> is the <code>math</code> module.</p>
<hr>
<p>The tutorial link you found with the line</p>
<blockquote>
<p>When a module named spam is imported, the interpreter first searches for a built-in module with that name.</p>
</blockquote>
<p>is referring to modules that are <em>directly compiled into the Python executable</em>, like <code>sys</code>. Such modules say <code>built-in</code> in their <code>repr</code>:</p>
<pre><code>>>> sys
<module 'sys' (built-in)>
</code></pre>
<p>You can see the names of all such modules in <a href="https://docs.python.org/3/library/sys.html#sys.builtin_module_names" rel="nofollow noreferrer"><code>sys.builtin_module_names</code></a>. For me, those names are</p>
<pre><code>>>> sys.builtin_module_names
('_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time', 'xxsubtype', 'zipimport')
</code></pre>
<p><code>math</code> isn't built-in in that sense.</p> | python|numpy|anaconda|conda | 0 |
5 | 36,076,303 | How do I apply this function to each group in my DataFrame | <p>Relatively new to Pandas, coming from an R background. I have a DataFrame like so</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'ProductID':[0,5,9,3,2,8], 'StoreID':[0,0,0,1,1,2]})
ProductID StoreID
0 0 0
1 5 0
2 9 0
3 3 1
4 2 1
5 8 2
</code></pre>
<p>For each StoreID, how do I label the rows of <code>df</code> as 1, 2, ... based on the ordered ProductID? Then, how do I normalize those ranks? In other words, How do I achieve the following</p>
<pre><code>df['Product_Rank_Index'] = np.array([1,2,3,2,1,1])
df['Product_Rank_Index_Normalized'] = np.array([1/3, 2/3, 3/3, 2/2, 1/2, 1/1])
ProductID StoreID Product_Rank_Index Product_Rank_Index_Normalized
0 0 0 1 0.333333
1 5 0 2 0.666667
2 9 0 3 1.000000
3 3 1 2 1.000000
4 2 1 1 0.500000
5 8 2 1 1.000000
</code></pre>
<p>I've tried doing some things with <code>df.groupby('StoreID')</code> but couldn't get anything to work.</p> | <p>Figured it out thanks to <a href="https://stackoverflow.com/a/26721325/2146894">this</a> answer.</p>
<pre><code>df.groupby('StoreID').ProductID.apply(lambda x: x.rank()/len(x))
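# a possible extension (an assumption, not in the original answer): assign both
# requested columns back, using rank(pct=True) for the normalized variant
# df['Product_Rank_Index'] = df.groupby('StoreID')['ProductID'].rank().astype(int)
# df['Product_Rank_Index_Normalized'] = df.groupby('StoreID')['ProductID'].rank(pct=True)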
</code></pre> | python|pandas|dataframe | 2 |
6 | 35,924,126 | Appending columns during groupby-apply operations | <h3>Context</h3>
<p>I have several groups of data (defined by 3 columns w/i the dataframe) and would like to perform a linear fit on each group and then append the estimated values (with lower + upper bounds of the fit).</p>
<h3>Problem</h3>
<p>After performing the operation, I get an error related to the shapes of the final vs original dataframes</p>
<h3>Example that demonstrates the problem:</h3>
<pre><code>from io import StringIO # modern python
#from StringIO import StringIO # old python
import numpy
import pandas
def fake_model(group, formula):
    # add the results to the group
    modeled = group.assign(
        fit=numpy.random.normal(size=group.shape[0]),
        ci_lower=numpy.random.normal(size=group.shape[0]),
        ci_upper=numpy.random.normal(size=group.shape[0])
    )
    return modeled
raw_csv = StringIO("""\
location,days,era,chemical,conc
MW-A,2415,modern,"Chem1",5.4
MW-A,7536,modern,"Chem1",0.21
MW-A,7741,modern,"Chem1",0.15
MW-A,2415,modern,"Chem2",33.0
MW-A,2446,modern,"Chem2",0.26
MW-A,3402,modern,"Chem2",0.18
MW-A,3626,modern,"Chem2",0.26
MW-A,7536,modern,"Chem2",0.32
MW-A,7741,modern,"Chem2",0.24
""")
data = pandas.read_csv(raw_csv)
modeled = (
data.groupby(by=['location', 'era', 'chemical'])
.apply(fake_model, formula='conc ~ days')
.reset_index(drop=True)
)
</code></pre>
<p>That raises a very long traceback, the crux of which is:</p>
<pre><code>[snip]
C:\Miniconda3\envs\puente\lib\site-packages\pandas\core\internals.py in construction_error(tot_items, block_shape, axes, e)
3880 raise e
3881 raise ValueError("Shape of passed values is {0}, indices imply {1}".format(
-> 3882 passed,implied))
3883
3884
ValueError: Shape of passed values is (8, 9), indices imply (8, 6)
</code></pre>
<p>I understand that I added three columns, hence a shape of (8, 9) vs (8, 6).</p>
<p>What I don't understand is that if I inspect the dataframe subgroup in the slightest way, the above error is <em>not</em> raised:</p>
<pre><code>def fake_model2(group, formula):
    _ = group.name
    return fake_model(group, formula)
modeled = (
data.groupby(by=['location', 'era', 'chemical'])
.apply(fake_model2, formula='conc ~ days')
.reset_index(drop=True)
)
print(modeled)
</code></pre>
<p>Which produces:</p>
<pre><code> location days era chemical conc ci_lower ci_upper fit
0 MW-A 2415 modern Chem1 5.40 -0.466833 -0.599039 -1.143867
1 MW-A 7536 modern Chem1 0.21 -1.790619 -0.532233 -1.356336
2 MW-A 7741 modern Chem1 0.15 1.892256 -0.405768 -0.718673
3 MW-A 2415 modern Chem2 33.00 0.428811 0.259244 -1.259238
4 MW-A 2446 modern Chem2 0.26 -1.616517 -0.955750 -0.727216
5 MW-A 3402 modern Chem2 0.18 -0.300749 0.341106 0.602332
6 MW-A 3626 modern Chem2 0.26 -0.232240 1.845240 1.340124
7 MW-A 7536 modern Chem2 0.32 -0.416087 -0.521973 -1.477748
8 MW-A 7741 modern Chem2 0.24 0.958202 0.634742 0.542667
</code></pre>
<h3>Question</h3>
<p>My work-around feels far too hacky to use in any real-world application. Is there a better way to apply my model and include the best-fit estimates to each group within the larger dataframe?</p> | <p>Yay, a non-hacky workaround exists</p>
<pre><code>In [18]: gr = data.groupby(['location', 'era', 'chemical'], group_keys=False)
In [19]: gr.apply(fake_model, formula='')
Out[19]:
location days era chemical conc ci_lower ci_upper fit
0 MW-A 2415 modern Chem1 5.40 -0.105610 -0.056310 1.344210
1 MW-A 7536 modern Chem1 0.21 0.574092 1.305544 0.411960
2 MW-A 7741 modern Chem1 0.15 -0.073439 0.140920 -0.679837
3 MW-A 2415 modern Chem2 33.00 1.959547 0.382794 0.544158
4 MW-A 2446 modern Chem2 0.26 0.484376 0.400111 -0.450741
5 MW-A 3402 modern Chem2 0.18 -0.422490 0.323525 0.520716
6 MW-A 3626 modern Chem2 0.26 -0.093855 -1.487398 0.222687
7 MW-A 7536 modern Chem2 0.32 0.124983 -0.484532 -1.162127
8 MW-A 7741 modern Chem2 0.24 -1.622693 0.949825 -1.049279
</code></pre>
<p>That actually saves you a <code>.reset_index</code> too :)</p>
<p><code>group_keys</code> was the culprit behind the error.
The possible bug in pandas comes from a regular <code>concat</code> of each group. With <code>group_keys=True</code> that's</p>
<pre><code>[('MW-A', 'modern', 'Chem1'), ('MW-A', 'modern', 'Chem2')]
</code></pre>
<p>which pandas wasn't expecting. This smells like a bug in pandas, but I haven't dug more to confirm.</p> | python|pandas | 4 |
7 | 37,173,706 | Handling value rollover in data frame | <p>I'm processing a dataframe that contains a column that consists of an error count. The problem I'm having is that the counter rolls over after 64k. Additionally, on long runs the rollover occurs multiple times. I need a method to correct these overflows and get an accurate count.</p> | <p>I'm not sure that it always works correctly, but let's try:</p>
<pre><code># groups
g = df.groupby((df['count'].diff() < 0).cumsum())
# mapping cumulative summand
mp = df.groupby((df['count'].diff() < 0).cumsum(), as_index=False).max().shift(1).fillna(0)['count']
# math
for grp, chunk in g:
    df['count'] += (df['count'].diff() < 0).cumsum().map(mp)
</code></pre>
<p>Original DF:</p>
<pre><code>In [416]: df
Out[416]:
count
0 0
1 1
2 2
3 3
4 4
5 5
6 0
7 1
8 2
9 3
10 4
11 0
12 1
13 2
14 3
15 4
16 5
17 6
18 7
19 8
</code></pre>
<p>Result:</p>
<pre><code>In [414]: df
Out[414]:
count
0 0.0
1 1.0
2 2.0
3 3.0
4 4.0
5 5.0
6 5.0
7 6.0
8 7.0
9 8.0
10 9.0
11 9.0
12 10.0
13 11.0
14 12.0
15 13.0
16 14.0
17 15.0
18 16.0
19 17.0
</code></pre>
<p>Explanation:</p>
<p>helper for grouping (monotonically increasing groups):</p>
<pre><code>In [418]: (df['count'].diff() < 0).cumsum()
Out[418]:
0 0
1 0
2 0
3 0
4 0
5 0
6 1
7 1
8 1
9 1
10 1
11 2
12 2
13 2
14 2
15 2
16 2
17 2
18 2
19 2
Name: count, dtype: int32
</code></pre>
<p>Summand for each group:</p>
<pre><code>In [420]: df.groupby((df['count'].diff() < 0).cumsum(), as_index=False).max().shift(1).fillna(0)['count']
Out[420]:
0 0.0
1 5.0
2 4.0
Name: count, dtype: float64
</code></pre>
<p>already mapped summands - they will be added <code>N</code> times (where <code>N</code> is number of groups - <code>3</code> for this example):</p>
<pre><code>In [421]: (df['count'].diff() < 0).cumsum().map(mp)
Out[421]:
0 0.0
1 0.0
2 0.0
3 0.0
4 0.0
5 0.0
6 5.0
7 5.0
8 5.0
9 5.0
10 5.0
11 4.0
12 4.0
13 4.0
14 4.0
15 4.0
16 4.0
17 4.0
18 4.0
19 4.0
Name: count, dtype: float64
</code></pre>
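<p>As a side note (this assumes something the question does not state explicitly, namely a counter that wraps at a fixed size such as 65536), a shorter sketch is to add that size once per observed rollover:</p>
<pre><code>rollover = 65536  # assumed counter size
df['count_corrected'] = df['count'] + rollover * (df['count'].diff() < 0).cumsum()
</code></pre>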
<p>setup test DF:</p>
<pre><code>df = pd.DataFrame({'count': np.arange(20)})
df.ix[6:10, 'count'] = range(5)
df.ix[11:19, 'count'] = range(9)
</code></pre> | python|pandas | 1 |
8 | 41,818,379 | Why do I have to import this from numpy if I am just referencing it from the numpy module | <p>Aloha!</p>
<p>I have two blocks of code, one that will work and one that will not. The only difference is a commented line of code for a numpy module I don't use. Why am I required to import that module when I never reference "npm"?</p>
<p>This command works:</p>
<pre><code>import numpy as np
import numpy.matlib as npm
V = np.array([[1,2,3],[4,5,6],[7,8,9]])
P1 = np.matlib.identity(V.shape[1], dtype=int)
P1
</code></pre>
<p>This command doesn't work:</p>
<pre><code>import numpy as np
#import numpy.matlib as npm
V = np.array([[1,2,3],[4,5,6],[7,8,9]])
P1 = np.matlib.identity(V.shape[1], dtype=int)
P1
</code></pre>
<p>The above gets this error:</p>
<pre><code>AttributeError: 'module' object has no attribute 'matlib'
</code></pre>
<p>Thanks in advance!</p> | <h2>Short Answer</h2>
<p>This is because <code>numpy.matlib</code> is an optional sub-package of <code>numpy</code> that must be imported separately. </p>
<p>The reason for this feature may be:</p>
<ul>
<li>In particular for <code>numpy</code>, the <code>numpy.matlib</code> sub-module redefines <code>numpy</code>'s functions to return matrices instead of ndarrays, an optional feature that many may not want</li>
<li>More generally, to load the parent module without loading a potentially slow-to-load module which many users may not often need</li>
<li>Possibly, namespace separation</li>
</ul>
<p>When you import just <code>numpy</code> without the sub-package <code>matlib</code>, then Python will be looking for <code>.matlib</code> as an attribute of the <code>numpy</code> package. This attribute has not been assigned to <code>numpy</code> without importing <code>numpy.matlib</code> (see discussion below)</p>
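<p>A short sketch of the two behaviours (this may vary with the NumPy version, since <code>numpy.matlib</code> is deprecated in recent releases):</p>
<pre><code>import numpy as np
# np.matlib.identity(3)    # AttributeError: module 'numpy' has no attribute 'matlib'
import numpy.matlib         # binds the submodule onto the parent package
print(np.matlib.identity(3, dtype=int))
</code></pre>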
<h2>Sub-Modules and Binding</h2>
<p>If you're wondering why <code>np.matlib.identity</code> works without having to use the keyword <code>npm</code>, that's because when you import the sub-module <code>matlib</code>, the parent module <code>numpy</code> (named <code>np</code> in your case) will be given an attribute <code>matlib</code> which is bound to the sub-module. This only works if you first define <code>numpy</code>.</p>
<p>From the <a href="https://docs.python.org/3.6/reference/import.html#submodules" rel="noreferrer">reference</a>:</p>
<blockquote>
<p>When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in <strong>import</strong>()) a binding is placed in the parent module’s namespace to the submodule object.</p>
</blockquote>
<h2><strong>Importing and __init__.py</strong></h2>
<p>The choice of what to import is determined in the modules' respective <code>__init__.py</code> files in the module directory. You can use the <code>dir()</code> function to see what names the respective modules define.</p>
<pre><code>>> import numpy
>> 'matlib' in dir(numpy)
# False
>> import numpy.matlib
>> 'matlib' in dir(numpy)
# True
</code></pre>
<p>Alternatively, if you look directly at the <a href="https://github.com/numpy/numpy/blob/master/numpy/__init__.py" rel="noreferrer"><code>__init__.py</code> file for <code>numpy</code></a> you'll see there's no import for <code>matlib</code>.</p>
<h2>Namespace across Sub-Modules</h2>
<p>If you're wondering how the namespace is copied over <em>smoothly</em>;</p>
<p>The <a href="https://github.com/numpy/numpy/blob/master/numpy/matlib.py" rel="noreferrer"><code>matlib</code> source code</a> runs this command to copy over the <code>numpy</code> namespace:</p>
<pre><code>import numpy as np # (1)
...
# need * as we're copying the numpy namespace
from numpy import * # (2)
...
__all__ = np.__all__[:] # copy numpy namespace # (3)
</code></pre>
<p>Line (2), <code>from numpy import *</code> is particularly important. Because of this, you'll notice that if you just import <code>numpy.matlib</code> you can still use all of <code>numpy</code> modules without having to import <code>numpy</code>! </p>
<p>Without line (2), the namespace copy in line (3) would only be attached to the sub-module. Interestingly, you can still do a funny command like this because of line (3).</p>
<pre><code>import numpy.matlib
numpy.matlib.np.matlib.np.array([1,1])
</code></pre>
<p>This is because the <code>np.__all__</code> is attached to the <code>np</code> of <code>numpy.matlib</code> (which was imported via line (1)). </p> | python|numpy | 22 |
9 | 7,776,679 | append two data frame with pandas | <p>When I try to merge two dataframes by rows doing:</p>
<pre><code>bigdata = data1.append(data2)
</code></pre>
<p>I get the following error:</p>
<blockquote>
<pre><code>Exception: Index cannot contain duplicate values!
</code></pre>
</blockquote>
<p>The index of the first data frame starts from 0 to 38 and the second one from 0 to 48. I didn't understand that I have to modify the index of one of the data frame before merging, but I don't know how to.</p>
<p>Thank you.</p>
<p>These are the two dataframes:</p>
<p><code>data1</code>:</p>
<pre><code> meta particle ratio area type
0 2 part10 1.348 0.8365 touching
1 2 part18 1.558 0.8244 single
2 2 part2 1.893 0.894 single
3 2 part37 0.6695 1.005 single
....clip...
36 2 part23 1.051 0.8781 single
37 2 part3 80.54 0.9714 nuclei
38 2 part34 1.071 0.9337 single
</code></pre>
<p><code>data2</code>:</p>
<pre><code> meta particle ratio area type
0 3 part10 0.4756 1.025 single
1 3 part18 0.04387 1.232 dusts
2 3 part2 1.132 0.8927 single
...clip...
46 3 part46 13.71 1.001 nuclei
47 3 part3 0.7439 0.9038 single
48 3 part34 0.4349 0.9956 single
</code></pre>
<p>the first column is the index</p> | <p>The <code>append</code> function has an optional argument <code>ignore_index</code> which you should use here to join the records together, since the index isn't meaningful for your application.</p> | python|pandas | 44 |
10 | 37,634,786 | Using first row in Pandas groupby dataframe to calculate cumulative difference | <p>I have the following grouped dataframe based on daily data</p>
<pre><code>Studentid Year Month BookLevel
JSmith 2015 12 1.4
2016 1 1.6
2 1.8
3 1.2
4 2.0
MBrown 2016 1 3.0
2 3.2
3 3.6
</code></pre>
<p>I want to calculate the difference from the starting point in BookLevel for each Studentid. The current BookLevel is a .max calculation from the GroupBy to get the highest bookLevel for each month for each student</p>
<p>What I am looking for is something like this:</p>
<pre><code> Studentid Year Month BookLevel Progress Since Start
JSmith 2015 12 1.4 0 (or NAN)
2016 1 1.6 .2
2 1.8 .4
3 1.2 -.2
4 2.0 .6
2016 1 3.0 0 (or NAN)
MBrown 2 3.2 .2
3 3.6 .6
</code></pre>
<p>I'm new to Python/Pandas and have tried a number of things and nothing comes close.</p> | <p>OK, this should work, if we <code>groupby</code> on the first level and subtract BookLevel from the series returned by calling <code>transform</code> with <code>first</code> then we can add this as the new desired column:</p>
<pre><code>In [47]:
df['ProgressSinceStart'] = df['BookLevel'] - df.groupby(level='Studentid')['BookLevel'].transform('first')
df
Out[47]:
BookLevel ProgressSinceStart
Studentid Year Month
JSmith 2015 12 1.4 0.0
2016 1 1.6 0.2
2 1.8 0.4
3 1.2 -0.2
4 2.0 0.6
MBrown 2016 1 3.0 0.0
2 3.2 0.2
3 3.6 0.6
</code></pre> | python|pandas | 8 |
11 | 37,971,322 | Column Order in Pandas Dataframe from dict of dict | <p>I am creating a pandas dataframe from a dictionary of dict in the following way :</p>
<pre><code>df = pd.DataFrame.from_dict(stats).transpose()
</code></pre>
<p>I want the columns in a particular order but cant seem to figure out how to do so. I have tried this:</p>
<pre><code>df = pd.DataFrame(columns=['c1','c2','c3']).from_dict(stats).transpose()
</code></pre>
<p>but the final output is always <code>c3, c2, c1</code>. Any ideas ?</p> | <p>You could do:</p>
<pre><code>df = pd.DataFrame.from_dict(stats).transpose().loc[:, ['c1','c2','c3']]
</code></pre>
<p>or just </p>
<pre><code>df = pd.DataFrame.from_dict(stats).transpose()[['c1','c2','c3']]
</code></pre> | python|dictionary|pandas|dataframe | 2 |
12 | 31,521,475 | Vectorization on nested loop | <p>I need to vectorize the following program : </p>
<pre><code>y = np.empty((100, 100, 3))
x = np.empty((300,))
for i in xrange(y.shape[0]):
    for j in xrange(y.shape[1]):
        y[i, j, 0] = x[y[i, j, 0]]
</code></pre>
<p>Of course, in my example, we suppose that y[:, :, :]<=299
Vectorization, as far as I know, can't simply work here as we are using the native python indexing on lists ...</p>
<p>I've heard of <code>np.apply_along_axis</code>, but it doesn't work on this special case, or may I missed something ?</p>
<p>Thank you very much for any help.</p> | <p><code>np.apply_along_axis</code> could work, but it's overkill.</p>
<p>First, there's a problem in your nested loop approach. <code>np.empty</code>, used to define <code>y</code>, returns an array of <code>np.float</code> values, which cannot be used to index an array. To take care of this, you have to cast the array as integers, e.g. <code>y = np.empty((100, 100, 3)).astype(np.int)</code>.</p>
<p>Once you do that, you can index using <code>y</code>, as follows:</p>
<pre><code>y = np.empty((100, 100, 3)).astype(np.uint8)
x = np.empty((300,))
y[:,:,0] = x[y[:,:,0]]
</code></pre>
<p>Of course, <code>y</code> is all 0's, so it's not quite clear what this accomplishes.</p> | python|numpy|vectorization|nested-loops | 2 |
13 | 31,296,285 | Converting pandas dataframe to numeric; seaborn can't plot | <p>I'm trying to create some charts using weather data, pandas, and seaborn. I'm having trouble using lmplot (or any other seaborn plot function for that matter), though. I'm being told it can't concatenate str and float objects, but I used convert_objects(convert_numeric=True) beforehand, so I'm not sure what the issue is, and when I just print the dataframe I don't see anything wrong, per se.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
new.convert_objects(convert_numeric=True)
sns.lmplot("AvgSpeed", "Max5Speed", new)
</code></pre>
<p>Some of the examples of unwanted placeholder characters that I saw in the few non-numeric spaces just glancing through the dataset were "M", " ", "-", "null", and some other random strings. Would any of these cause a problem for convert_objects? Does seaborn know to ignore NaN? I don't know what's wrong. Thanks for the help.</p> | <p>You need to assign the result to itself:</p>
<pre><code>new = new.convert_objects(convert_numeric=True)
</code></pre>
<p>See the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html#pandas.DataFrame.convert_objects" rel="nofollow noreferrer">docs</a></p>
<p><code>convert_objects</code> is now deprecated as of version <code>0.21.0</code>; you have to use <code>to_numeric</code> instead. With <code>errors='coerce'</code>, placeholder strings such as <code>'M'</code> or <code>'null'</code> become <code>NaN</code> rather than raising. For a single column:</p>
<pre><code>new['AvgSpeed'] = pd.to_numeric(new['AvgSpeed'], errors='coerce')
</code></pre>
<p>if you have multiple columns:</p>
<pre><code>new = new.apply(pd.to_numeric, errors='coerce')
</code></pre> | python|numpy|pandas|plot|seaborn | 2 |
14 | 64,531,149 | Repeating Data and Incorrect Names in Pandas DataFrame count Function Results | <p>I have a question about the Pandas DataFrame <code>count</code> function.</p>
<p>I'm working on the following code:</p>
<pre><code>d = {'c1': [1, 1, 1, 1, 1], 'c2': [1, 1, 1, 1, 1], 'c3': [1, 1, 1, 1, 1], 'Animal': ["Cat", "Cat", "Dog", "Cat", "Dog"]}
import pandas as pd
df = pd.DataFrame(data=d)
</code></pre>
<p>So I end up with <code>DataFrame</code> <code>df</code>, which contains the following:</p>
<pre><code> c1 c2 c3 Animal
0 1 1 1 Cat
1 1 1 1 Cat
2 1 1 1 Dog
3 1 1 1 Cat
4 1 1 1 Dog
</code></pre>
<p>Columns <code>c1</code>, <code>c2</code>, and <code>c3</code> contain information about my Animal collection which is not relevant to this question. My goal is to count the number of animals by species, i.e., the contents of the <code>Animal</code> column.</p>
<p>When I run:</p>
<pre><code>df.groupby("Animal").count()
</code></pre>
<p>the result is a <code>DataFrame</code> that contains:</p>
<pre><code> c1 c2 c3
Animal
Cat 3 3 3
Dog 2 2 2
</code></pre>
<p>As you can see, the desired result, counting the number times <code>Cat</code> and <code>Dog</code> appear in column <code>Aninal</code> is correctly computed. However, this result is a bit unsatisfying to me for the following reasons:</p>
<ol>
<li>The counts of <code>Cat</code> and <code>Dog</code> are each repeated three times in the output, one for each column header <code>c1</code>, <code>c2</code>, and <code>c3</code>.</li>
<li>The headers of the columns in this resulting <code>DataFrame</code> are really wrong: the entries are not <code>c1</code>, <code>c2</code>, or <code>c3</code> items anymore (those could be heights, weights, etc. for example), but rather animal species <strong>counts</strong>. To me this is a problem, since it is easy for client code (for example, code that uses a function that I write returning this <code>DataFrame</code>) to misinterpret these as entries instead of counts.</li>
</ol>
<p>My questions are:</p>
<ol>
<li>Why is the <code>count</code> function implemented this way, with repeating data and unchanged column headers?</li>
<li>Is it ever possible for each column to be different in a given row in the result of <code>count</code>?</li>
<li>Is there are cleaner way to do this in Pandas that addresses my two concerns listed above?</li>
</ol>
<p>I realize the following code will partially address these problems:</p>
<pre><code>df.groupby("Animal").count()['c1']
</code></pre>
<p>which results in a Series with the contents:</p>
<pre><code>Animal
Cat 3
Dog 2
Name: c1, dtype: int64
</code></pre>
<p>But this still isn't really what I'm looking for, since:</p>
<ol>
<li>It's inelegant, what's the logic of filtering on <code>c1</code> (or <code>c2</code> or <code>c3</code>, which would result in the same Series except the name)?</li>
<li>The name (analogous to the argument with the column header above) is still <code>c1</code>, which is misleading and inelegant.</li>
</ol>
<p>I realize I can rename the Series as follows:</p>
<pre><code>df.groupby("Animal").count()['c1'].rename("animal_count")
</code></pre>
<p>which results in the following Series:</p>
<pre><code>Animal
Cat 3
Dog 2
Name: animal_count, dtype: int64
</code></pre>
<p>That's a satisfactory result; it does not repeat data and is reasonably named, though I would have preferred a DataFrame at this point (I realize I could covert it). However, the code I used to get this,</p>
<pre><code>df.groupby("Animal").count()['c1'].rename("animal_count")
</code></pre>
<p>is very unsatisfying for elegance and length.</p>
<p>Another possible solution I've found is:</p>
<pre><code>df.groupby("Animal").size()
</code></pre>
<p>which results in:</p>
<pre><code>Animal
Cat 3
Dog 2
dtype: int64
</code></pre>
<p>however it's not clear to me if this is coincidently correct or if <code>size</code> and <code>count</code> really do the same thing. If so, why are both implemented in Pandas?</p>
<p>Is there a better way to do this in Pandas?</p>
<p>Thanks to everyone for your input!</p> | <p>The count function counts (for each column as you've noted) the number of non-na / non-empty cells. In general, this could differ for each column if they have different missing values. After a groupby though, I don't think this would ever be the case.</p>
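<p>A small sketch (reusing the <code>df</code> from the question) of how the related options compare:</p>
<pre><code>print(df.groupby('Animal').count())   # non-NaN count per remaining column
print(df.groupby('Animal').size())    # one row count per group, as a Series
print(df['Animal'].value_counts())    # same counts without a groupby
</code></pre>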
<p>Like you mentioned though, I believe .size() is the function you want to just get the size of each grouping. I think this should also exist on a normal DataFrame, but it looks like it's a property not a function there (since it just returns a single number of rows; its not a mapping to apply to each group)</p> | python|pandas|dataframe|count | 0 |
15 | 64,286,384 | How to count number of unique values in pandas while each cell includes list | <p>I have a data frame like this:</p>
<p>import pandas as pd
import numpy as np</p>
<pre><code>Out[10]:
samples subject trial_num
0 [0 2 2 1 1
1 [3 3 0 1 2
2 [1 1 1 1 3
3 [0 1 2 2 1
4 [4 5 6 2 2
5 [0 8 8 2 3
</code></pre>
<p>I want to have the output like this:</p>
<pre><code> samples subject trial_num frequency
0 [0 2 2 1 1 2
1 [3 3 0 1 2 2
2 [1 1 1 1 3 1
3 [0 1 2 2 1 3
4 [4 5 6 2 2 3
5 [0 8 8 2 3 2
</code></pre>
<p>The frequency here is the number of unique values in each list per sample. For example, <code>[0, 2, 2]</code> only have one unique value.</p>
<p>I can do the unique values in pandas without having a list, or implement it using a for loop that goes through each row and accesses each list, and so on, but I want a better pandas way to do it.</p>
<p>Thanks.</p> | <p>You can use <code>collections.Counter</code> for the task:</p>
<pre><code>from collections import Counter
df['frequency'] = df['samples'].apply(lambda x: sum(v==1 for v in Counter(x).values()))
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> samples subject trial_num frequency
0 [0, 2, 2] 1 1 1
1 [3, 3, 0] 1 2 1
2 [1, 1, 1] 1 3 0
3 [0, 1, 2] 2 1 3
4 [4, 5, 6] 2 2 3
5 [0, 8, 8] 2 3 1
</code></pre>
<hr />
<p>EDIT: For updated question:</p>
<pre><code>df['frequency'] = df['samples'].apply(lambda x: len(set(x)))
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> samples subject trial_num frequency
0 [0, 2, 2] 1 1 2
1 [3, 3, 0] 1 2 2
2 [1, 1, 1] 1 3 1
3 [0, 1, 2] 2 1 3
4 [4, 5, 6] 2 2 3
5 [0, 8, 8] 2 3 2
</code></pre> | python|pandas|dataframe | 2 |
16 | 47,705,684 | TensorFlow: `tf.data.Dataset.from_generator()` does not work with strings on Python 3.x | <p>I need to iterate through a large number of image files and feed the data to tensorflow. I created a <code>Dataset</code> backed by a generator function that produces the file path names as strings and then transforms the string paths to image data using <code>map</code>. But it failed as generating string values won't work, as shown below. Is there a fix or workaround for this?</p>
<pre><code>2017-12-07 15:29:05.820708: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
producing data/miniImagenet/val/n01855672/n0185567200001000.jpg
2017-12-07 15:29:06.009141: W tensorflow/core/framework/op_kernel.cc:1192] Unimplemented: Unsupported object type str
2017-12-07 15:29:06.009215: W tensorflow/core/framework/op_kernel.cc:1192] Unimplemented: Unsupported object type str
[[Node: PyFunc = PyFunc[Tin=[DT_INT64], Tout=[DT_STRING], token="pyfunc_1"](arg0)]]
Traceback (most recent call last):
File "/Users/me/.tox/tf2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
File "/Users/me/.tox/tf2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
status, run_metadata)
File "/Users/me/.tox/tf2/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnimplementedError: Unsupported object type str
[[Node: PyFunc = PyFunc[Tin=[DT_INT64], Tout=[DT_STRING], token="pyfunc_1"](arg0)]]
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,21168]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
</code></pre>
<p>The test code is shown below. It can work correctly with <code>from_tensor_slices</code> or by first putting the file name list in a tensor. However, either workaround would exhaust GPU memory.</p>
<pre><code>import tensorflow as tf
if __name__ == "__main__":
file_names = ['data/miniImagenet/val/n01855672/n0185567200001000.jpg',
'data/miniImagenet/val/n01855672/n0185567200001005.jpg']
# note: converting the file list to tensor and returning an index from generator works
# path_to_indexes = {p: i for i, p in enumerate(file_names)}
# file_names_tensor = tf.convert_to_tensor(file_names)
def dataset_producer():
    for s in file_names:
        print('producing', s)
        yield s
dataset = tf.data.Dataset.from_generator(dataset_producer, output_types=(tf.string),
output_shapes=(tf.TensorShape([])))
# note: this would also work
# dataset = tf.data.Dataset.from_tensor_slices(tf.convert_to_tensor(file_names))
def read_image(filename):
    # filename = file_names_tensor[filename_index]
    image_file = tf.read_file(filename, name='read_file')
    image = tf.image.decode_jpeg(image_file, channels=3)
    image.set_shape((84,84,3))
    image = tf.reshape(image, [21168])
    image = tf.cast(image, tf.float32) / 255.0
    return image
dataset = dataset.map(read_image)
dataset = dataset.batch(2)
data_iterator = dataset.make_one_shot_iterator()
images = data_iterator.get_next()
print('images', images)
max_value = tf.argmax(images)
with tf.Session() as session:
    result = session.run(max_value)
    print(result)
</code></pre> | <p>This is a bug affecting Python 3.x that was <a href="https://github.com/tensorflow/tensorflow/commit/17ba3a69f4c3509711a3da5eff3cb6be99e0936d#diff-6933e3bb88491e1a9d006c709aba017c" rel="nofollow noreferrer">fixed</a> after the TensorFlow 1.4 release. All releases of TensorFlow from 1.5 onwards contain the fix.</p>
<p>If you just use an earlier version, the workaround is to convert the strings to <code>bytes</code> before returning them from the generator. The following code should work:</p>
<pre><code>def dataset_producer():
    for s in file_names:
        print('producing', s)
        yield s.encode('utf-8')  # Convert `s` to `bytes`.
dataset = tf.data.Dataset.from_generator(dataset_producer, output_types=(tf.string),
output_shapes=(tf.TensorShape([])))
</code></pre> | tensorflow|tensorflow-datasets | 6 |
17 | 47,860,314 | Error while importing a file while working with jupyter notebook | <p>Recently I've been working with <code>jupyter</code> notebooks and was trying to read an excel file with pandas and it gives me the following error:</p>
<blockquote>
<p>FileNotFoundError: [Errno 2] No such file or directory</p>
</blockquote>
<p>But it works fine and reads the file with the exact same lines of code when I run it on <code>Spyder</code>.</p>
<p>Any advice on how to solve this issue?</p> | <p>Seems like an installation error. Do this:</p>
<h1>For Python 2</h1>
<pre><code>pip install --upgrade --force-reinstall --no-cache-dir jupyter
</code></pre>
<h1>For Python 3</h1>
<pre><code>pip3 install --upgrade --force-reinstall --no-cache-dir jupyter
</code></pre> | python|pandas|path|jupyter-notebook | 1 |
18 | 47,760,015 | Python Dataframe: How to get alphabetically ordered list of column names | <p>I currently am able to get a list of all the column names in my dataframe using: </p>
<pre><code>df_EVENT5.columns.get_values()
</code></pre>
<p>But I want the list to be in alphabetical order ... how do I do that? </p> | <p>In order to get the list of column names in alphabetical order, try:</p>
<pre><code>df_EVENT5.columns.sort_values().values
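# an equivalent sketch in plain Python (not in the original answer), returning a list:
# sorted(df_EVENT5.columns)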
</code></pre> | python|pandas|sorting|dataframe|field | 2 |
19 | 47,587,633 | How to reduce the processing time of reading a file using numpy | <p>I want to read a file, compare some values, find the indexes of the repeated ones and delete the repeated ones.
I am doing this process in a while loop.
This is taking a lot of processing time, about 76 sec.
Here is my code:</p>
<pre><code>Source = np.empty(shape=[0,7])
Source = CalData  # CalData is the log file data
CalTab = np.empty(shape=[0,7])
Source = Source[Source[:, 4].argsort()] # Sort by Azimuth
while Source.size >= 1:
    temp = np.logical_and(Source[:,4]==Source[0,4], Source[:,5]==Source[0,5])
    selarrayindex = np.argwhere(temp) # find indexes
    selarray = Source[temp]
    CalTab = np.append(CalTab, [selarray[selarray[:,6].argsort()][-1]], axis=0)
    Source = np.delete(Source, selarrayindex, axis=0) # delete other rows with similar AZ, EL
</code></pre>
<p>The while-loop processing is taking most of the time.
Are there any other methods (using plain Python without numpy, or more efficient numpy)?
Please help!!</p> | <p>In any case, this should improve your timings, I think:</p>
<pre><code>def lex_pick(Source):
    idx = np.lexsort((Source[:, 6], Source[:, 5], Source[:, 4]))
    # indices to sort by columns 4, then 5, then 6
    # if dtype = float
    mask = np.r_[np.logical_not(np.isclose(Source[idx[:-1], 5], Source[idx[1:], 5])), True]
    # if dtype = int or string
    mask = np.r_[Source[idx[:-1], 5] != Source[idx[1:], 5], True]
    # `mask` is `True` in rows before where column 5 changes
    return Source[idx[mask], 6]
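# usage sketch (an assumption, not in the original answer): returns the largest
# column-6 value for each (column-4, column-5) pair, analogous to the loop in the question
# best = lex_pick(Source)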
</code></pre> | python|file|numpy | 0 |
20 | 47,767,546 | Select subset of numpy.ndarray based on other array's values | <p>
I have two numpy.ndarrays and I would like to select a subset of Array #2 based on the values in Array #1 (Criteria: Values > 1):</p>
<pre class="lang-py prettyprint-override"><code>#Array 1 - print(type(result_data):
<class 'numpy.ndarray'>
#print(result_data):
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
...
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 3 3 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1]
#Array #2 - print(type(test_data):
<class 'numpy.ndarray'>
#print(test_data):
[[-1.38693584 0.76183275]
[-1.38685102 0.76187584]
[-1.3869291 0.76186742]
...,
[-1.38662322 0.76160456]
[-1.38662322 0.76160456]
[-1.38662322 0.76160456]]
</code></pre>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>x=0
selArray = np.empty
for i in result_data:
    x += 1
    if i > 1:
        selArray = np.append(selArray, [test_data[x].T[0], test_data[x].T[1]])
</code></pre>
<p>...but this gives me:</p>
<pre class="lang-py prettyprint-override"><code>#print(type(selArray)):
<class 'numpy.ndarray'>
#print(selArray):
[<built-in function empty> -1.3868538952656493 0.7618747030055314
-1.3868543839578398 0.7618746157390688 -1.3870217784863983
0.7618121504051398 -1.3870217784863983 0.7618121504051398
-1.3870217784863983 0.7618121504051398 -1.3869304105000566
...
-1.3869682317849474 0.7617139232748376 -1.3869103741202438
0.7616839734248734 -1.3868025127724706 0.7616153994385625
-1.3869751607420777 0.761730050117126 -1.3866515941520503
0.7615994122226143 -1.3866515941520503 0.7615994122226143]
</code></pre>
<p>Clearly, <code>[]</code> are missing around elements - and I don't understand where the <code><built-in function empty></code> comes from.</p> | <p>It turned out to be pretty straight forward:</p>
<pre><code>selArray = test_data[result_data_>1]
</code></pre>
<p>See also possible solution in comment from Nain!</p> | python|arrays|numpy | 1 |
21 | 49,223,529 | Transforming extremely skewed data for regression analysis | <p>I have a Pandas Series from a housing data-set (size of the series = 48,2491), named "exempt_land". The first 10 entries of this series are: </p>
<pre><code>0 0.0
2 17227.0
3 0.0
7 0.0
10 0.0
14 7334.0
15 0.0
16 0.0
18 0.0
19 8238.0
Name: exempt_land, dtype: float64
</code></pre>
<p>As the data size is quite large, I did not perform <strong>dummy_variable</strong> transformation. </p>
<p>Now, my goal is to carry out regression analysis. Hence, I would like to transform this data to appear <strong>Normal</strong>.</p>
<p>The original data has a <strong>Skewness</strong> of <strong>344.58</strong> and <strong>Kurtosis</strong> = <strong>168317.32</strong>. To better understand the original data, I am also including the <strong>Distribution plot</strong> and <strong>Probability plot</strong> of the original data.</p>
<p><a href="https://i.stack.imgur.com/sZ8Sm.png" rel="nofollow noreferrer">Distribution Plot BEFORE transformation</a></p>
<p><a href="https://i.stack.imgur.com/ljrlW.png" rel="nofollow noreferrer">Probability Plot BEFORE transformation</a></p>
<p>After performing <strong>Log</strong> transformation, I get the <strong>Skewness</strong> of <strong>5.21</strong> and <strong>Kurtosis</strong> = <strong>25.96</strong>. The transformed <strong>Distribution</strong> and <strong>Probability</strong> plots now look as follows:</p>
<p><a href="https://i.stack.imgur.com/p1rk8.png" rel="nofollow noreferrer">Distribution Plot AFTER <strong>np.log10(exempt_land + 1)</strong> transformation</a></p>
<p><a href="https://i.stack.imgur.com/3k63x.png" rel="nofollow noreferrer">Probability Plot AFTER <strong>np.log10(exempt_land + 1)</strong> transformation</a></p>
<p>I also performed various other transformations ("power", "exp", "box-cox", "reciprocal") and I got similar bad results (in reciprocal transformation case, the results were quite worse).</p>
<p>So my question is, how can I 'tame' this data to behave nicely when doing regression analysis. Furthermore, upon transformation, the <strong>skew</strong> of <strong>5.21</strong> is still quite high, will this create any problem?
What other transformations can I perform to make the data look more <strong>Normal</strong>?</p>
<p>I hope my questions are clear here. Any help from the community is greatly appreciated. Thank you so much in advance.</p> | <p>With all the zeros, you need to use a non-normal distribution. Some variety of Tobit might make sense here. (You can't transform discrete data and get less discrete data.)</p> | python|pandas|normal-distribution | 0 |
22 | 48,980,261 | pandas fillna is not working on subset of the dataset | <p>I want to impute the missing values for <code>df['box_office_revenue']</code> with the median specified by <code>df['release_date'] == x</code> and <code>df['genre'] == y</code> . </p>
<p>Here is my median finder function below.</p>
<pre><code>def find_median(df, year, genre, col_year, col_rev):
    median = df[(df[col_year] == year) & (df[col_rev].notnull()) & (df[genre] > 0)][col_rev].median()
    return median
</code></pre>
<p>The median function works. I checked. I did the code below since I was getting some CopyValue error.</p>
<pre><code>pd.options.mode.chained_assignment = None # default='warn'
</code></pre>
<p>I then go through the years and genres, <code>col_name = ['is_drama', 'is_horror', etc]</code> . </p>
<pre><code>i = df['release_year'].min()
while (i < df['release_year'].max()):
    for genre in col_name:
        median = find_median(df, i, genre, 'release_year', 'box_office_revenue')
        df[(df['release_year'] == i) & (df[genre] > 0)]['box_office_revenue'].fillna(median, inplace=True)
    print(i)
    i += 1
</code></pre>
<p>However, nothing changed! </p>
<pre><code>len(df['box_office_revenue'].isnull())
</code></pre>
<p>The output was 35527. Meaning none of the null values in <code>df['box_office_revenue']</code> had been filled. </p>
<p>Where did I go wrong?</p>
<p>Here is a quick look at the data: The other columns are just binary variables</p>
<p><a href="https://i.stack.imgur.com/JuPHV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JuPHV.png" alt="enter image description here"></a></p> | <p>You mentioned</p>
<blockquote>
<p>I did the code below since I was getting some CopyValue error...</p>
</blockquote>
<p>The warning is important. You did not give your data, so I cannot actually check, but the problem is likely due to:</p>
<pre><code>df[(df['release_year'] == i) & (df[genre] > 0)]['box_office_revenue'].fillna(..)
</code></pre>
<p>Let's break this down:</p>
<p>First you select some rows with:</p>
<pre><code>df[(df['release_year'] == i) & (df[genre] > 0)]
</code></pre>
<p>Then from that, you select a columns with:</p>
<pre><code>...['box_office_revenue']
</code></pre>
<p>And now you have a problem...</p>
<h3>Why?</h3>
<p>The problem is that when you selected some rows (ie: not all), pandas was forced to create a copy of your dataframe. You then select a column of the <em>copy</em>!. Then you <code>fillna()</code> on the copy. Not super useful.</p>
<h3>How do I fix it?</h3>
<p>Select the column first:</p>
<pre><code>df['box_office_revenue'][(df['release_year'] == i) & (df[genre] > 0)].fillna(..)
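# an alternative sketch (not from the original answer) that avoids chained indexing:
# mask = (df['release_year'] == i) & (df[genre] > 0) & df['box_office_revenue'].isnull()
# df.loc[mask, 'box_office_revenue'] = median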
</code></pre>
<p>By selecting the entire column first, pandas is not forced to make a copy, and thus subsequent operations should work as desired.</p> | python|pandas|missing-data | 3 |
23 | 58,836,772 | Is there support for functional layers api support in tensorflow 2.0? | <p>I'm working on converting our model from tensorflow 1.8.0 to 2.0, but using the sequential API is quite difficult for our current model. So is there any support for the functional API in 2.0, since it is not easy to use the sequential API here?</p> | <p>Tensorflow 2.0 is more or less built around the Keras APIs. You can use tf.keras.Model for building models with both the sequential and the functional API.</p> | python-3.x|tensorflow|tensorflow2.0 | 1 |
24 | 58,763,438 | Conditional ffill based on another column | <p>I'm trying to conditionally ffill a value until a second column encounters a value and then reset the first column value. Effectively the first column is an 'on' switch until the 'off' switch (second column) encounters a value. I've yet to have a working example using ffill and where.</p>
<p>Example input:</p>
<pre><code>Index Start End
0 0 0
1 0 0
2 1 0
3 0 0
4 0 0
5 0 0
6 0 1
7 0 0
8 1 0
9 0 0
10 0 0
11 0 0
12 0 1
13 0 1
14 0 0
</code></pre>
<p>Desired output:</p>
<pre><code>Index Start End
0 0 0
1 0 0
2 1 0
3 1 0
4 1 0
5 1 0
6 1 1
7 0 0
8 1 0
9 1 0
10 1 0
11 1 0
12 1 1
13 0 1
14 0 0
</code></pre>
<p><strong>EDIT:</strong></p>
<p>There are issues when dealing with values set based on another column. The logic is as follows: Start should be zero until R column is below 25, then positive until R column is above 80 and the cycle should repeat. Yet on row 13 Start is inexplicably set 1 despite not matching criteria.</p>
<pre><code>df = pd.DataFrame(np.random.randint(0, 100, size=100), columns=['R'])
df['Start'] = np.where((df.R < 25), 1, 0)
df['End'] = np.where((df.R > 80), 1, 0)
df.loc[df['End'].shift().eq(0), 'Start'] = df['Start'].replace(0, np.nan).ffill().fillna(0).astype(int)
</code></pre>
<pre><code> R Start End
0 58 0 0
1 98 0 1
2 91 0 1
3 69 0 0
4 55 0 0
5 57 0 0
6 64 0 0
7 75 0 1
8 78 0 1
9 90 0 1
10 24 1 0
11 89 1 1
12 36 0 0
13 70 **1** 0
</code></pre> | <p>Try:</p>
<pre><code>df.loc[df['End'].shift().eq(0), 'Start'] = df['Start'].replace(0, np.nan).ffill().fillna(0).astype(int)
</code></pre>
<p>[out]</p>
<pre><code> Start End
0 0 0
1 0 0
2 1 0
3 1 0
4 1 0
5 1 0
6 1 1
7 0 0
8 1 0
9 1 0
10 1 0
11 1 0
12 1 1
13 0 1
14 0 0
</code></pre> | python|pandas | 2 |
25 | 58,705,193 | Why does calling np.array() on this list comprehension produce a 3d array instead of 2d? | <p>I have a script produces the first several iterations of a Markov matrix multiplying a given set of input values. With the matrix stored as <code>A</code> and the start values in the column <code>u0</code>, I use this list comprehension to store the output in an array:</p>
<pre><code>out = np.array([ ( (A**n) * u0).T for n in range(10) ])
</code></pre>
<p>The output has shape <code>(10,1,6)</code>, but I want the output in shape <code>(10,6)</code> instead. Obviously, I can fix this with <code>.reshape()</code>, but is there a way to avoid creating the extra dimension in the first place, perhaps by simplifying the list comprehension or the inputs?</p>
<p>Here's the full script and output:</p>
<pre><code>import numpy as np
# Random 6x6 Markov matrix
n = 6
A = np.matrix([ (lambda x: x/x.sum())(np.random.rand(n)) for _ in range(n)]).T
print(A)
#[[0.27457312 0.20195133 0.14400801 0.00814027 0.06026188 0.23540134]
# [0.21526648 0.17900277 0.35145882 0.30817386 0.15703758 0.21069114]
# [0.02100412 0.05916883 0.18309142 0.02149681 0.22214047 0.15257011]
# [0.17032696 0.11144443 0.01364982 0.31337906 0.25752732 0.1037133 ]
# [0.03081507 0.2343255 0.2902935 0.02720764 0.00895182 0.21920371]
# [0.28801424 0.21410713 0.01749843 0.32160236 0.29408092 0.07842041]]
# Random start values
u0 = np.matrix(np.random.randint(51, size=n)).T
print(u0)
#[[31]
# [49]
# [44]
# [29]
# [10]
# [ 0]]
# Find the first 10 iterations of the Markov process
out = np.array([ ( (A**n) * u0).T for n in range(10) ])
print(out)
#[[[31. 49. 44. 29. 10.
# 0. ]]
#
# [[25.58242101 41.41600236 14.45123543 23.00477134 26.08867045
# 32.45689942]]
#
# [[26.86917065 36.02438292 16.87560159 26.46418685 22.66236879
# 34.10428921]]
#
# [[26.69224394 37.06346073 16.59208202 26.48817955 22.56696872
# 33.59706504]]
#
# [[26.68772374 36.99727159 16.49987315 26.5003184 22.61130862
# 33.7035045 ]]
#
# [[26.68766363 36.98517264 16.50532933 26.51717543 22.592951
# 33.71170797]]
#
# [[26.68695152 36.98895204 16.50314718 26.51729716 22.59379049
# 33.70986161]]
#
# [[26.68682195 36.98848867 16.50286371 26.51763013 22.59362679
# 33.71056876]]
#
# [[26.68681128 36.98850409 16.50286036 26.51768807 22.59359453
# 33.71054167]]
#
# [[26.68680313 36.98851046 16.50285038 26.51769497 22.59359219
# 33.71054886]]]
print(out.shape)
#(10, 1, 6)
out = out.reshape(10,n)
print(out)
#[[31. 49. 44. 29. 10. 0. ]
# [25.58242101 41.41600236 14.45123543 23.00477134 26.08867045 32.45689942]
# [26.86917065 36.02438292 16.87560159 26.46418685 22.66236879 34.10428921]
# [26.69224394 37.06346073 16.59208202 26.48817955 22.56696872 33.59706504]
# [26.68772374 36.99727159 16.49987315 26.5003184 22.61130862 33.7035045 ]
# [26.68766363 36.98517264 16.50532933 26.51717543 22.592951 33.71170797]
# [26.68695152 36.98895204 16.50314718 26.51729716 22.59379049 33.70986161]
# [26.68682195 36.98848867 16.50286371 26.51763013 22.59362679 33.71056876]
# [26.68681128 36.98850409 16.50286036 26.51768807 22.59359453 33.71054167]
# [26.68680313 36.98851046 16.50285038 26.51769497 22.59359219 33.71054886]]
</code></pre> | <p>I think your confusion lies with how arrays can be joined. </p>
<p>Start with a simple 1d array (in <code>numpy</code> 1d is a real thing, not just a 'row vector' or 'column vector'):</p>
<pre><code>In [288]: arr = np.arange(6)
In [289]: arr
Out[289]: array([0, 1, 2, 3, 4, 5])
</code></pre>
<p><code>np.array</code> joins element arrays along a new 1st dimension:</p>
<pre><code>In [290]: np.array([arr,arr])
Out[290]:
array([[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]])
</code></pre>
<p><code>np.stack</code> with the default axis value does the same thing. Read its docs.</p>
<p>We can make a 2d array, a column vector:</p>
<pre><code>In [291]: arr1 = arr[:,None]
In [292]: arr1
Out[292]:
array([[0],
[1],
[2],
[3],
[4],
[5]])
In [293]: arr1.shape
Out[293]: (6, 1)
</code></pre>
<p>Using <code>np.array</code> on its transpose, i.e. on (1,6) arrays:</p>
<pre><code>In [294]: np.array([arr1.T, arr1.T])
Out[294]:
array([[[0, 1, 2, 3, 4, 5]],
[[0, 1, 2, 3, 4, 5]]])
In [295]: _.shape
Out[295]: (2, 1, 6)
</code></pre>
<p>Note the middle size 1 dimension, that bothered you.</p>
<p><code>np.vstack</code> joins the arrays along the existing 1st dimension. It does not add one:</p>
<pre><code>In [296]: np.vstack([arr1.T, arr1.T])
Out[296]:
array([[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]])
</code></pre>
<p>Or we could join the arrays horizontally, on the 2nd dimension:</p>
<pre><code>In [297]: np.hstack([arr1, arr1])
Out[297]:
array([[0, 0],
[1, 1],
[2, 2],
[3, 3],
[4, 4],
[5, 5]])
</code></pre>
<p>That is (6,2) which can be transposed to (2,6):</p>
<pre><code>In [298]: np.hstack([arr1, arr1]).T
Out[298]:
array([[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]])
</code></pre> | python|numpy | 1 |
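<p>Applied to the question itself: because <code>A</code> and <code>u0</code> are <code>np.matrix</code> objects, each product is a (6, 1) matrix and stays 2-D after <code>.T</code>. A sketch of one way to avoid the extra dimension at the source, assuming the same <code>A</code> and <code>u0</code>, is to flatten each result with <code>.A1</code>:</p>
<pre><code># .A1 returns the matrix flattened to a 1-D ndarray, so the outer np.array
# stacks ten length-6 vectors into shape (10, 6) directly.
out = np.array([((A**n) * u0).A1 for n in range(10)])
print(out.shape)   # (10, 6)
</code></pre>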
26 | 58,650,432 | pandas df.at utterly slow in some lines | <p>I've got a .txt logfile with IMU sensor measurements which need to be parsed to a .CSV file. The accelerometer and gyroscope have a 500Hz ODR (output data rate), the magnetometer 100Hz, GPS 1Hz and baro 1Hz. Wi-Fi, BLE, pressure, light etc. are also logged but most of that is not needed. The smartphone app doesn't save all measurements sequentially.</p>
<p>It takes 1000+ seconds to parse a file of 200k+ lines to a pandas DataFrame sort the DataFrame on the timestamps and save it as a csv file.</p>
<p>When assigning values of sensor measurements at a coordinate (Row=Timestamp, column=sensor measurement) in the DataFrame, some need ~40% of the runtime, while others take +- 0.1% of the runtime. </p>
<p>What could be the reason for this?
It shouldn't take 1000+ seconds.<hr> </p>
<h3>What is in the logfile:</h3>
<pre><code>ACCE;AppTimestamp(s);SensorTimestamp(s);Acc_X(m/s^2);Acc_Y(m/s^2);Acc_Z(m/s^2);Accuracy(integer)
GYRO;AppTimestamp(s);SensorTimestamp(s);Gyr_X(rad/s);Gyr_Y(rad/s);Gyr_Z(rad/s);Accuracy(integer)
MAGN;AppTimestamp(s);SensorTimestamp(s);Mag_X(uT);;Mag_Y(uT);Mag_Z(uT);Accuracy(integer)
MAGN;AppTimestamp(s);SensorTimestamp(s);Mag_X(uT);;Mag_Y(uT);Mag_Z(uT);Accuracy(integer)
PRES;AppTimestamp(s);SensorTimestamp(s);Pres(mbar);Accuracy(integer)
LIGH;AppTimestamp(s);SensorTimestamp(s);Light(lux);Accuracy(integer)
PROX;AppTimestamp(s);SensorTimestamp(s);prox(?);Accuracy(integer)
HUMI;AppTimestamp(s);SensorTimestamp(s);humi(Percentage);Accuracy(integer)
TEMP;AppTimestamp(s);SensorTimestamp(s);temp(Celsius);Accuracy(integer)
AHRS;AppTimestamp(s);SensorTimestamp(s);PitchX(deg);RollY(deg);YawZ(deg);RotVecX();RotVecY();RotVecZ();Accuracy(int)
GNSS;AppTimestamp(s);SensorTimeStamp(s);Latit(deg);Long(deg);Altitude(m);Bearing(deg);Accuracy(m);Speed(m/s);SatInView;SatInUse
WIFI;AppTimestamp(s);SensorTimeStamp(s);Name_SSID;MAC_BSSID;RSS(dBm);
BLUE;AppTimestamp(s);Name;MAC_Address;RSS(dBm);
BLE4;AppTimestamp(s);MajorID;MinorID;RSS(dBm);
SOUN;AppTimestamp(s);RMS;Pressure(Pa);SPL(dB);
RFID;AppTimestamp(s);ReaderNumber(int);TagID(int);RSS_A(dBm);RSS_B(dBm);
IMUX;AppTimestamp(s);SensorTimestamp(s);Counter;Acc_X(m/s^2);Acc_Y(m/s^2);Acc_Z(m/s^2);Gyr_X(rad/s);Gyr_Y(rad/s);Gyr_Z(rad/s);Mag_X(uT);;Mag_Y(uT);Mag_Z(uT);Roll(deg);Pitch(deg);Yaw(deg);Quat(1);Quat(2);Quat(3);Quat(4);Pressure(mbar);Temp(Celsius)
IMUL;AppTimestamp(s);SensorTimestamp(s);Counter;Acc_X(m/s^2);Acc_Y(m/s^2);Acc_Z(m/s^2);Gyr_X(rad/s);Gyr_Y(rad/s);Gyr_Z(rad/s);Mag_X(uT);;Mag_Y(uT);Mag_Z(uT);Roll(deg);Pitch(deg);Yaw(deg);Quat(1);Quat(2);Quat(3);Quat(4);Pressure(mbar);Temp(Celsius)
POSI;Timestamp(s);Counter;Latitude(degrees); Longitude(degrees);floor ID(0,1,2..4);Building ID(0,1,2..3)
</code></pre>
<h3>A part of the RAW .txt logfile:</h3>
<pre><code>MAGN;1.249;343268.933;2.64000;-97.50000;-69.06000;0
GYRO;1.249;343268.934;0.02153;0.06943;0.09880;3
ACCE;1.249;343268.934;-0.24900;0.53871;9.59625;3 GNSS;1.250;1570711878.000;52.225976;5.174543;58.066;175.336;3.0;0.0;23;20
ACCE;1.253;343268.936;-0.26576;0.52674;9.58428;3
GYRO;1.253;343268.936;0.00809;0.06515;0.10002;3
ACCE;1.253;343268.938;-0.29450;0.49561;9.57710;3
GYRO;1.253;343268.938;0.00015;0.06088;0.10613;3
PRES;1.253;343268.929;1011.8713;3
GNSS;1.254;1570711878.000;52.225976;5.174543;58.066;175.336;3.0;0.0;23;20
ACCE;1.255;343268.940;-0.29450;0.49801;9.57710;3
GYRO;1.255;343268.940;-0.00596;0.05843;0.10979;3
ACCE;1.260;343268.942;-0.30647;0.50280;9.55795;3
GYRO;1.261;343268.942;-0.01818;0.05721;0.11529;3
MAGN;1.262;343268.943;2.94000;-97.74000;-68.88000;0
</code></pre>
<hr>
<p>fileContent are the strings of the txt file as showed above.</p>
<h3>Piece of the code: </h3>
<pre><code>def parseValues(line):
valArr = []
valArr = np.fromstring(line[5:], dtype=float, sep=";")
return (valArr)
i = 0
while i < len(fileContent):
if (fileContent[i][:4] == "ACCE"):
vals = parseValues(fileContent[i])
idx = vals[1] - initialSensTS
df.at[idx, 'ax'] = vals[2]
df.at[idx, 'ay'] = vals[3]
df.at[idx, 'az'] = vals[4]
df.at[idx, 'accStat'] = vals[5]
i += 1
</code></pre>
<p><hr>
The code works, but it's utterly slow at some of the df.at[idx, 'xx'] lines. </p>
<p>See Line # 28.</p>
<h3>Line profiler output:</h3>
<pre><code>Line # Hits Time Per Hit % Time Line Contents
==============================================================
22 1 1.0 1.0 0.0 i = 0
23 232250 542594.0 2.3 0.0 while i < len(fileContent):
24 232249 294337000.0 1267.3 23.8 update_progress(i / len(fileContent))
25 232249 918442.0 4.0 0.1 if (fileContent[i][:4] == "ACCE"):
26 54602 1584625.0 29.0 0.1 vals = parseValues(fileContent[i])
27 54602 316968.0 5.8 0.0 idx = vals[1] - initialSensTS
28 54602 504189480.0 9233.9 40.8 df.at[idx, 'ax'] = vals[2]
29 54602 8311109.0 152.2 0.7 df.at[idx, 'ay'] = vals[3]
30 54602 4901983.0 89.8 0.4 df.at[idx, 'az'] = vals[4]
31 54602 4428239.0 81.1 0.4 df.at[idx, 'accStat'] = vals[5]
32 54602 132590.0 2.4 0.0 i += 1
</code></pre> | <p>This doesn't address the part of your question about sorting timestamps etc, but should be an efficient replacement for your <code>'ACCE'</code> parsing code. </p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import collections as colls
logs_file_path = '../resources/imu_logs_raw.txt'
msmt_type_dict = colls.defaultdict(list)
with open(logs_file_path, 'r') as file_1:
for line in file_1:
curr_measure_type, *rest_str = line.split(';')
rest_str[-1] = rest_str[-1].strip()
msmt_type_dict[curr_measure_type].append(rest_str)
acce_df = pd.DataFrame(data=msmt_type_dict['ACCE'], columns=['app_timestamp', 'sensor_timestamp', 'acc_x', 'acc_y', 'acc_z', 'accuracy'])
</code></pre>
<p>If you can provide some more information/context I would love to take a look at the timestamp sorting aspect.</p> | python|python-3.x|pandas|numpy|jupyter-notebook | 0 |
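<p>For the sorting aspect, a possible follow-up sketch that assumes the <code>acce_df</code> built above: cast the string fields to numbers, sort on the sensor timestamp, then write the CSV.</p>
<pre><code>acce_df = acce_df.astype({'app_timestamp': float, 'sensor_timestamp': float,
                          'acc_x': float, 'acc_y': float, 'acc_z': float,
                          'accuracy': int})
acce_df = acce_df.sort_values('sensor_timestamp').reset_index(drop=True)
acce_df.to_csv('acce.csv', index=False)
</code></pre>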
27 | 70,330,361 | Count values from different columns of a dataframe | <p>Let's say I have the following dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'home': ['team1', 'team2', 'team3', 'team2'],
'away': ['team2', 'team3', 'team1', 'team1']
}
df = pd.DataFrame(data)
</code></pre>
<p>How can I count the number of times each element (team) appears across both columns?
The expected result is</p>
<pre><code>team1 3
team2 3
team3 2
</code></pre> | <p>You can concatenate the columns and use <code>.value_counts</code> method:</p>
<pre><code>out = pd.concat([df['home'], df['away']]).value_counts()
</code></pre>
<p>Output:</p>
<pre><code>team1 3
team2 3
team3 2
dtype: int64
</code></pre>
<p>You can also get the underlying numpy array, <code>flatten</code> it, find unique values and their counts, wrap it in a dictionary (this is by far the fastest method):</p>
<pre><code>out = dict(np.array(np.unique(df.values.flatten(), return_counts=True)).T)
</code></pre>
<p>Output:</p>
<pre><code>{'team1': 3, 'team2': 3, 'team3': 2}
</code></pre> | python|pandas | 3 |
28 | 70,334,785 | Rename items from a column in pandas | <p>I'm working in a dataset which I faced the following situation:</p>
<pre><code>df2['Shape'].value_counts(normalize=True)
</code></pre>
<pre><code>Round 0.574907
Princess 0.093665
Oval 0.082609
Emerald 0.068820
Radiant 0.059752
Pear 0.041739
Marquise 0.029938
Asscher 0.024099
Cushion 0.010807
Marwuise 0.005342
Uncut 0.004720
Marquis 0.003602
Name: Shape, dtype: float64
</code></pre>
<p>and my goal is to make the values 'Marquis' and 'Marwuise' be counted as part of 'Marquise'. How can I combine them?</p> | <p>Since you didn't state any restrictions, a quick fix is to first change the entries the way you want, as shown below:</p>
<pre><code>df2.loc[df2['Shape'] == 'Marquis', 'Shape'] = 'Marquise'
df2.loc[df2['Shape'] == 'Marwuise', 'Shape'] = 'Marquise'
</code></pre>
<p>Now, run this command,</p>
<pre><code>df2['Shape'].value_counts(normalize=True)
</code></pre> | python|pandas|dataframe | 2 |
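<p>A shorter alternative, as a sketch, is <code>Series.replace</code> with a mapping; the spellings below are taken from the <code>value_counts</code> output in the question:</p>
<pre><code>df2['Shape'] = df2['Shape'].replace({'Marquis': 'Marquise', 'Marwuise': 'Marquise'})
df2['Shape'].value_counts(normalize=True)
</code></pre>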
29 | 70,154,686 | Replacing href dynamic tag in python (html body) | <p>I have a script that generates email bodies from a dataframe and then sends them to every user.
The problem is that my content is dynamic, and so are the links I am sending to each user (different links for different users).</p>
<p>The html body of the email is like:</p>
<pre><code><table border="2" class="dataframe">
<thead>
<tr style="text-align: center;">
<th style = "background-color: orange">AF</th>
<th style = "background-color: orange">Enlaces Forms</th>
</tr>
</thead>
<tbody>
<tr>
<td>71</td>
<td><a href="https://forms.office.com/Pages/ResponsePage.aspx?id=uIG64v4DfECWMjVIRUVBVjVBSCQlQCNjPTEkJUAjdD1n" target="_blank">https://forms.office.com/Pages/ResponsePage.aspx?id=uIG64v4DfECWofS8D1EufUjVIRUVBVjVBSCQlQCNjPTEkJUAjdD1n</a></td>
</tr>
<tr>
<td>64</td>
<td><a href="https://forms.office.com/Pages/ResponsePage.aspx?id=uIG64v4DfECWofS8D1EufU4jQyVDREMk4zOSQlQCNjPTEkJUAjdD1n" target="_blank">https://forms.office.com/Pages/ResponsePage.aspx?id=uIG64v4DfECWofS8D1EufUVVGWFRUNjQyVDREMk4zOSQlQCNjPTEkJUAjdD1n</a></td>
</tr>
</tbody>
</table>
</code></pre>
<p>I am replacing html tags like this:</p>
<pre><code>table2=df[['AF','Links']].to_html(index=False, render_links=True, escape=False).replace('<tr style="text-align: right;">','<tr style="text-align: center;">').replace('<table border="1"','<table border="2"').replace('<th>','<th style = "background-color: orange">').replace(f'<td><a href="{enlace}"','<td><a href="LINK"')
</code></pre>
<p>but I do not know how to make it work for the <strong>"href"</strong> tag.
My goal is to give the hyperlinks readable link text in the mail body.</p>
<p>How can I do that?</p>
<p>EDIT:
When I try to implement jinja2 Template:</p>
<pre><code>import jinja2
from jinja2 import Template
temp2='<a href=""> </a>'
linkdef=Template(temp2).render(url=f"{enlace_tabla['LINKS']}",enlace="Flask")
table2=enlace_tabla[['AF',linkdef]].to_html(index=False, render_links=True, escape=False).replace('<tr style="text-align: right;">','<tr style="text-align: center;">').replace('<table border="1"','<table border="2"').replace('<th>','<th style = "background-color: orange">')
</code></pre>
<p>The following error is raised:</p>
<pre><code>KeyError: '[\'<a href=""> </a>\'] not in index'
</code></pre> | <p>I would do this in different way.</p>
<p>First I would create column with <code><a href="{url}">SOME TEXT</a></code></p>
<pre class="lang-py prettyprint-override"><code>def convert(row):
return f'<a href={row["LINKS"]}>CLICK THIS LINK</a>'
df['LINKS_HTML'] = df.apply(convert, axis=1)
</code></pre>
<p>If I would have column with text for every link then it could be</p>
<pre class="lang-py prettyprint-override"><code>def convert(row):
return f'<a href={row["LINKS"]}>{row["TEXT"]}</a>'
df['LINKS_HTML'] = df.apply(convert, axis=1)
</code></pre>
<p>And later I would render table using column <code>LINKS_HTML</code> instead of <code>LINKS</code><br />
(and without <code>render_links=True</code>)</p>
<pre class="lang-py prettyprint-override"><code>html = df[['AF', 'LINKS_HTML']].to_html(escape=False, index=False)
</code></pre>
<hr />
<p>Minimal working example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'AF': [1, 2, 3],
'LINKS': [
'https://httpbin.org/get?arg=101',
'https://httpbin.org/get?arg=102',
'https://httpbin.org/get?arg=103',
],
'TEXT': ['Text 1', 'Text 2', 'Text 3']
}
df = pd.DataFrame(data)
#print(df)
def convert(row):
#return f'<a href={row["LINKS"]}>CLICK THIS LINK</a>'
return f'<a href={row["LINKS"]}>{row["TEXT"]}</a>'
df['LINKS_HTML'] = df.apply(convert, axis=1)
html = df[['AF', 'LINKS_HTML']].to_html(escape=False, index=False)
print(html)
</code></pre>
<p>Result:</p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>AF</th>
<th>LINKS_HTML</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><a href=https://httpbin.org/get?arg=101>Text 1</a></td>
</tr>
<tr>
<td>2</td>
<td><a href=https://httpbin.org/get?arg=102>Text 2</a></td>
</tr>
<tr>
<td>3</td>
<td><a href=https://httpbin.org/get?arg=103>Text 3</a></td>
</tr>
</tbody>
</table>
</code></pre>
<hr />
<p>Or I would use <code>jinja2</code> to generate table without <code>to_html()</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import jinja2
data = {
'AF': [1, 2, 3],
'LINKS': [
'https://httpbin.org/get?arg=101',
'https://httpbin.org/get?arg=102',
'https://httpbin.org/get?arg=103',
],
'TEXT': ['Text 1', 'Text 2', 'Text 3']
}
df = pd.DataFrame(data)
template = '''<table border="2" class="dataframe">
<thead>
<tr style="text-align: center;">
<th style="background-color: orange">AF</th>
<th style="background-color: orange">Enlaces Forms</th>
</tr>
</thead>
<tbody>
{%- for index, row in data.iterrows() %}
<tr>
<td>{{ row["AF"] }}</td>
<td><a href="{{ row["LINKS"] }}">{{ row["TEXT"] }}</a></td>
</tr>
{%- endfor %}
</tbody>
</table>
'''
html = jinja2.Template(template).render(data=df)
print(html)
</code></pre>
<p>Result:</p>
<pre><code><table border="2" class="dataframe">
<thead>
<tr style="text-align: center;">
<th style="background-color: orange">AF</th>
<th style="background-color: orange">Enlaces Forms</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><a href="https://httpbin.org/get?arg=101">Text 1</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://httpbin.org/get?arg=102">Text 2</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://httpbin.org/get?arg=103">Text 3</a></td>
</tr>
</tbody>
</table>
</code></pre> | python|html|pandas | 0 |
30 | 70,236,604 | xgboost model prediction error : Input numpy.ndarray must be 2 dimensional | <p>I have a model that's trained locally and deployed to an engine, so that I can make inferences / invoke endpoint. When I try to make predictions, I get the following exception.</p>
<pre><code>raise ValueError('Input numpy.ndarray must be 2 dimensional')
ValueError: Input numpy.ndarray must be 2 dimensional
</code></pre>
<p>My <code>model</code> is a xgboost model with some pre-processing (variable encoding) and hyper-parameter tuning. Code to train the model:</p>
<pre><code>import pandas as pd
import pickle
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
# split df into train and test
X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:21], df.iloc[:,-1], test_size=0.1)
X_train.shape
(1000,21)
# Encode categorical variables
cat_vars = ['cat1','cat2','cat3']
cat_transform = ColumnTransformer([('cat', OneHotEncoder(handle_unknown='ignore'), cat_vars)], remainder='passthrough')
encoder = cat_transform.fit(X_train)
X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)
X_train.shape
(1000,420)
# Define a xgboost regression model
model = XGBRegressor()
# Do hyper-parameter tuning
.....
# Fit model
model.fit(X_train, y_train)
</code></pre>
<p>Here's what <code>model</code> object looks like:</p>
<pre><code>XGBRegressor(colsample_bytree=xxx, gamma=xxx,
learning_rate=xxx, max_depth=x, n_estimators=xxx,
subsample=xxx)
</code></pre>
<p>My test data is a string of float values which is turned into an array as the data must be passed as numpy array.</p>
<pre><code>testdata = [........., 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 2000, 200, 85, 412412, 123, 41, 552, 50000, 512, 0.1, 10.0, 2.0, 0.05]
</code></pre>
<p>I have tried to reshape the numpy array from 1d to 2d, however, that doesn't work as the number of features between test data and trained model do not match.</p>
<p>My question is how do I pass a numpy array same as the length of # of features in trained model? Any work around ideas? I am able to make predictions by passing test data as a list locally.</p>
<p>More info on inference script here: <a href="https://github.com/aws-samples/amazon-sagemaker-local-mode/blob/main/xgboost_script_mode_local_training_and_serving/code/inference.py" rel="nofollow noreferrer">https://github.com/aws-samples/amazon-sagemaker-local-mode/blob/main/xgboost_script_mode_local_training_and_serving/code/inference.py</a></p>
<pre><code>Traceback (most recent call last):
File "/miniconda3/lib/python3.6/site-packages/sagemaker_containers/_functions.py", line 93, in wrapper
return fn(*args, **kwargs)
File "/opt/ml/code/inference.py", line 75, in predict_fn
prediction = model.predict(input_data)
File "/miniconda3/lib/python3.6/site-packages/xgboost/sklearn.py", line 448, in predict
test_dmatrix = DMatrix(data, missing=self.missing, nthread=self.n_jobs)
File "/miniconda3/lib/python3.6/site-packages/xgboost/core.py", line 404, in __init__
self._init_from_npy2d(data, missing, nthread)
File "/miniconda3/lib/python3.6/site-packages/xgboost/core.py", line 474, in _init_from_npy2d
raise ValueError('Input numpy.ndarray must be 2 dimensional')
ValueError: Input numpy.ndarray must be 2 dimensional
</code></pre>
<p>When I attempt to reshape the test data to 2d numpy array, using <code>testdata.reshape(-1,1)</code>, I run into <code>feature_names</code> mismatch exception.</p>
<pre><code>File "/opt/ml/code/inference.py", line 75, in predict_fn
3n0u6hucsr-algo-1-qbiyg | prediction = model.predict(input_data)
3n0u6hucsr-algo-1-qbiyg | File "/miniconda3/lib/python3.6/site-packages/xgboost/sklearn.py", line 456, in predict
3n0u6hucsr-algo-1-qbiyg | validate_features=validate_features)
3n0u6hucsr-algo-1-qbiyg | File "/miniconda3/lib/python3.6/site-packages/xgboost/core.py", line 1284, in predict
3n0u6hucsr-algo-1-qbiyg | self._validate_features(data)
3n0u6hucsr-algo-1-qbiyg | File "/miniconda3/lib/python3.6/site-packages/xgboost/core.py", line 1690, in _validate_features
3n0u6hucsr-algo-1-qbiyg | data.feature_names))
3n0u6hucsr-algo-1-qbiyg | ValueError: feature_names mismatch: ['f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15',
</code></pre>
<p>Update: I can retrieve the feature names for the model by running <code>model.get_booster().feature_names</code>. Is there a way I can use these names and assign to test data point so that they are consistent?</p>
<pre><code>['f0', 'f1', 'f2', 'f3', 'f4', 'f5',......'f417','f418','f419']
</code></pre> | <p>I think the solution is to provide the test data in the same data type as the training data.</p>
<p>Thank you for the comment. With the added information that, after encoding, <code>X_train</code> is a <code>scipy.sparse.csr.csr_matrix</code> and <code>y_train</code> is a pandas <code>Series</code>: if there are no memory constraints, we can transform both to numpy arrays by using:</p>
<pre><code>model.fit(X_train.toarray(), y_train.to_numpy())
</code></pre>
<p>Reference to:</p>
<ul>
<li>scipy manual: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.toarray.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.toarray.html</a></li>
<li>pandas manual: <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.to_numpy.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.Series.to_numpy.html</a></li>
</ul> | python|amazon-web-services|numpy|amazon-sagemaker | 1 |
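<p>On the feature-count mismatch itself: the 21 raw test values have to go through the same fitted <code>ColumnTransformer</code> as the training data so that they expand to the 420 encoded columns the model expects. A sketch, assuming the <code>df</code>, <code>encoder</code> and <code>model</code> from the question and a list <code>testdata</code> of 21 raw values in the original column order:</p>
<pre><code>import pandas as pd

# Wrap the raw values in a one-row frame with the original training column names,
# then encode with the already-fitted transformer.
test_row = pd.DataFrame([testdata], columns=df.iloc[:, 0:21].columns)
test_encoded = encoder.transform(test_row)

# .toarray() assumes the transformer returns a sparse matrix, as it does above
prediction = model.predict(test_encoded.toarray())
</code></pre>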
31 | 56,357,418 | Get the average number of entries per month with datetime in Pandas | <p>I have a large df with many entries per month. I would like to see the average number of entries per month, for example to see if there are any months that normally have more entries. (Ideally I'd like to plot this with a line for the overall mean to compare against, but that is maybe a later question).
My df is something like this: </p>
<pre><code>ufo=pd.read_csv('https://raw.githubusercontent.com/justmarkham/pandas-videos/master/data/ufo.csv')
ufo['Time']=pd.to_datetime(ufo.Time)
</code></pre>
<p>Where the head looks like this:
<a href="https://i.stack.imgur.com/ec4q2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ec4q2.png" alt="enter image description here"></a></p>
<p>So if I'd like to see, for example, whether there are more ufo-sightings in the summer, how would I go about it?</p>
<p>I have tried: </p>
<pre><code>ufo.groupby(ufo.Time.month).mean()
</code></pre>
<p>But it only works if I am calculating a numerical value. If I use <code>count()</code> instead, I get the sum of all entries for all months.</p>
<p>EDIT: To clarify, I would like to have the mean of entries - ufo-sightings - per month. </p> | <p>You could do something like this:</p>
<pre><code># count the total months in the records
def total_month(x):
return x.max().year -x.min().year + 1
new_df = ufo.groupby(ufo.Time.dt.month).Time.agg(['size', total_month])
new_df['mean_count'] = new_df['size'] /new_df['total_month']
</code></pre>
<p>Output:</p>
<pre><code> size total_month mean_count
Time
1 862 57 15.122807
2 817 70 11.671429
3 1096 55 19.927273
4 1045 68 15.367647
5 1168 53 22.037736
6 3059 71 43.084507
7 2345 65 36.076923
8 1948 64 30.437500
9 1635 67 24.402985
10 1723 65 26.507692
11 1509 50 30.180000
12 1034 56 18.464286
</code></pre> | python|pandas | 2 |
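<p>To also draw the overall mean mentioned in the question, a short sketch building on the <code>new_df</code> computed above:</p>
<pre><code>import matplotlib.pyplot as plt

ax = new_df['mean_count'].plot(marker='o')
ax.axhline(new_df['mean_count'].mean(), color='grey', linestyle='--', label='overall mean')
ax.set_xlabel('month')
ax.set_ylabel('average sightings per month')
ax.legend()
plt.show()
</code></pre>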
32 | 56,397,461 | How do I select the minimum and maximum values for a horizontal lollipop plot/dumbbell chart? | <p>I have created a dumbbell chart but I am getting too many minimum and maximum values for each category type. I want to display only one skyblue dot (the minimum price) and one green dot (the maximum price) per area. </p>
<p>This is what the chart looks like so far:</p>
<p><a href="https://i.stack.imgur.com/ZAXsV.png" rel="nofollow noreferrer">My dumbbell chart</a></p>
<p>Here is my DataFrame:</p>
<p><a href="https://i.stack.imgur.com/AnCzc.png" rel="nofollow noreferrer">The DataFrame</a></p>
<p>Here is a link to the full dataset:</p>
<p><a href="https://drive.google.com/open?id=1PpI6PlO8ox2vKfM4aGmEUexCPPWa59S_" rel="nofollow noreferrer">https://drive.google.com/open?id=1PpI6PlO8ox2vKfM4aGmEUexCPPWa59S_</a> </p>
<p>And here is my code so far:</p>
<pre><code> import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
db = df[['minPrice','maxPrice', 'neighbourhood_hosts']]
ordered_db = db.sort_values(by='minPrice')
my_range=db['neighbourhood_hosts']
plt.figure(figsize=(8,6))
plt.hlines(y=my_range, xmin=ordered_db['minPrice'], xmax=ordered_db['maxPrice'], color='grey', alpha=0.4)
plt.scatter(ordered_db['minPrice'], my_range, color='skyblue', alpha=1, label='minimum price')
plt.scatter(ordered_db['maxPrice'], my_range, color='green', alpha=0.4 , label='maximum price')
plt.legend()
plt.title("Comparison of the minimum and maximum prices")
plt.xlabel('Value range')
plt.ylabel('Area')
</code></pre>
<p>How can I format my code so that I only have one minimum and one maximum value for each area?</p> | <p>As per conversation, here is the script:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('dumbbell data.csv')
db = df[['minPrice','maxPrice', 'neighbourhood_hosts']]
#create max and min price based on area name
max_price = db.groupby(['neighbourhood_hosts'])['maxPrice'].max().reset_index()
min_price = db.groupby(['neighbourhood_hosts'])['minPrice'].min().reset_index()
var_price = pd.DataFrame()
var_price['range'] = max_price.maxPrice-min_price.minPrice
var_price['neighbourhood_hosts'] = min_price['neighbourhood_hosts']
var_price = var_price.sort_values(by='range')
#sort max and min price according to the price range
max_price = max_price.reindex(var_price.index)
min_price = min_price.reindex(var_price.index)
plt.figure(figsize=(8,6))
plt.hlines(y=min_price['neighbourhood_hosts'], xmin=min_price['minPrice'], xmax=max_price['maxPrice'], color='grey', alpha=0.4)
plt.scatter(min_price['minPrice'], min_price['neighbourhood_hosts'], color='skyblue', alpha=1, label='minimum price')
plt.scatter(max_price['maxPrice'], max_price['neighbourhood_hosts'], color='green', alpha=0.4 , label='maximum price')
plt.legend()
plt.title("Comparison of the minimum and maximum prices")
plt.xlabel('Value range')
plt.ylabel('Area')
</code></pre>
<p><a href="https://i.stack.imgur.com/bHC5e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bHC5e.png" alt="enter image description here"></a></p> | python|pandas|numpy|matplotlib|seaborn | 1 |
33 | 55,642,036 | Finding the indexes of the N maximum values across an axis in Pandas | <p>I know that there is a method .argmax() that returns the indexes of the maximum values across an axis.</p>
<p>But what if we want to get the indexes of the 10 highest values across an axis? </p>
<p>How could this be accomplished?</p>
<p>E.g.:</p>
<pre><code>data = pd.DataFrame(np.random.random_sample((50, 40)))
</code></pre> | <p>IIUC, say, if you want to get the index of the top 10 largest numbers of column <code>col</code>:</p>
<pre><code>data[col].nlargest(10).index
</code></pre> | python|pandas|argmax | 0 |
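<p>If the goal is the top 10 for every column at once (closer to an N-value <code>argmax</code> along an axis), two sketches using the <code>data</code> frame from the question:</p>
<pre><code>import numpy as np

# Index labels of the 10 largest values in each column, one list per column
top10_labels = {col: data[col].nlargest(10).index.tolist() for col in data.columns}

# Or, with plain numpy, positional indices of the 10 largest values per column,
# ordered from largest to smallest
top10_positions = np.argsort(data.values, axis=0)[-10:][::-1]
</code></pre>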
34 | 55,768,432 | How to create multiple line graph using seaborn and find rate? | <p>I need help to create a multiple line graph using below DataFrame</p>
<pre><code> num user_id first_result second_result result date point1 point2 point3 point4
0 0 1480R clear clear pass 9/19/2016 clear consider clear consider
1 1 419M consider consider fail 5/18/2016 consider consider clear clear
2 2 416N consider consider fail 11/15/2016 consider consider consider consider
3 3 1913I consider consider fail 11/25/2016 consider consider consider clear
4 4 1938T clear clear pass 8/1/2016 clear consider clear clear
5 5 1530C clear clear pass 6/22/2016 clear clear consider clear
6 6 1075L consider consider fail 9/13/2016 consider consider clear consider
7 7 1466N consider clear fail 6/21/2016 consider clear clear consider
8 8 662V consider consider fail 11/1/2016 consider consider clear consider
9 9 1187Y consider consider fail 9/13/2016 consider consider clear clear
10 10 138T consider consider fail 9/19/2016 consider clear consider consider
11 11 1461Z consider clear fail 7/18/2016 consider consider clear consider
12 12 807N consider clear fail 8/16/2016 consider consider clear clear
13 13 416Y consider consider fail 10/2/2016 consider clear clear clear
14 14 638A consider clear fail 6/21/2016 consider clear consider clear
</code></pre>
<p>data file linke <a href="https://drive.google.com/file/d/1seiLsvzMiDXx_OehdRvk3uoYrQwzG35p/view?usp=sharing" rel="nofollow noreferrer">data.xlsx</a> or data as dict</p>
<pre><code>data = {'num': {0: 0,
1: 1,
2: 2,
3: 3,
4: 4,
5: 5,
6: 6,
7: 7,
8: 8,
9: 9,
10: 10,
11: 11,
12: 12,
13: 13,
14: 14},
'user_id': {0: '1480R',
1: '419M',
2: '416N',
3: '1913I',
4: '1938T',
5: '1530C',
6: '1075L',
7: '1466N',
8: '662V',
9: '1187Y',
10: '138T',
11: '1461Z',
12: '807N',
13: '416Y',
14: '638A'},
'first_result': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'consider',
12: 'consider',
13: 'consider',
14: 'consider'},
'second_result': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'clear',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'clear',
12: 'clear',
13: 'consider',
14: 'clear'},
'result': {0: 'pass',
1: 'fail',
2: 'fail',
3: 'fail',
4: 'pass',
5: 'pass',
6: 'fail',
7: 'fail',
8: 'fail',
9: 'fail',
10: 'fail',
11: 'fail',
12: 'fail',
13: 'fail',
14: 'fail'},
'date': {0: '9/19/2016',
1: '5/18/2016',
2: '11/15/2016',
3: '11/25/2016',
4: '8/1/2016',
5: '6/22/2016',
6: '9/13/2016',
7: '6/21/2016',
8: '11/1/2016',
9: '9/13/2016',
10: '9/19/2016',
11: '7/18/2016',
12: '8/16/2016',
13: '10/2/2016',
14: '6/21/2016'},
'point1': {0: 'clear',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'consider',
10: 'consider',
11: 'consider',
12: 'consider',
13: 'consider',
14: 'consider'},
'point2': {0: 'consider',
1: 'consider',
2: 'consider',
3: 'consider',
4: 'consider',
5: 'clear',
6: 'consider',
7: 'clear',
8: 'consider',
9: 'consider',
10: 'clear',
11: 'consider',
12: 'consider',
13: 'clear',
14: 'clear'},
'point3': {0: 'clear',
1: 'clear',
2: 'consider',
3: 'consider',
4: 'clear',
5: 'consider',
6: 'clear',
7: 'clear',
8: 'clear',
9: 'clear',
10: 'consider',
11: 'clear',
12: 'clear',
13: 'clear',
14: 'consider'},
'point4': {0: 'consider',
1: 'clear',
2: 'consider',
3: 'clear',
4: 'clear',
5: 'clear',
6: 'consider',
7: 'consider',
8: 'consider',
9: 'clear',
10: 'consider',
11: 'consider',
12: 'clear',
13: 'clear',
14: 'clear'}
}
</code></pre>
<p>I need to create a bar graph and a line graph, I have created the bar graph using <code>point1</code> where x = consider, clear and y = count of consider and clear</p>
<p>but I have no idea how to create a line graph by this scenario</p>
<p>x = date</p>
<p>y = pass rate (%)</p>
<p>Pass Rate is a number of clear/(consider + clear)</p>
<p>graph the rate for first_result, second_result, result all on the same graph</p>
<p>and the graph should look like below<a href="https://i.stack.imgur.com/JNP52.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JNP52.png" alt="line graph"></a></p>
<p>Please comment or answer on how I can do it. If I can get an idea of how to group the dates and get the ratio, that would also be great.</p> | <p>Here's my idea of how to do it:</p>
<pre><code># first convert all `clear`, `consider` to 1,0
tmp_df = df[['first_result', 'second_result']].apply(lambda x: x.eq('clear').astype(int))
# convert `pass`, `fail` to 1,0
tmp_df['result'] = df.result.eq('pass').astype(int)
# copy the date
tmp_df['date'] = df['date']
# groupby and compute mean, i.e. number_pass/total_count
tmp_df = tmp_df.groupby('date').mean()
tmp_df.plot()
</code></pre>
<p>Output for this dataset</p>
<p><a href="https://i.stack.imgur.com/RfMtd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RfMtd.png" alt="enter image description here"></a></p> | pandas|matplotlib|seaborn|linegraph | 0 |
35 | 64,971,775 | How to compare columns with equal values? | <p>I have a dataframe which looks as follows:</p>
<pre><code> colA colB
0 2 1
1 4 2
2 3 7
3 8 5
4 7 2
</code></pre>
<p>I have two datasets: one with a customer code and other information, and the other with addresses plus the related customer code.</p>
<p>I did a merge with the two bases and now I want to return the lines where the values in the columns are the same, but I'm not able to do it.</p>
<p>Can someone help me?</p>
<p>Thanks</p> | <p>You can try:</p>
<pre><code>dfs=df.loc[df['colA']==df['colB']]
</code></pre> | pandas | 0 |
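<p>Equivalent spellings of the same filter, for example with <code>eq</code> or <code>query</code>:</p>
<pre><code>dfs = df[df['colA'].eq(df['colB'])]
# or
dfs = df.query("colA == colB")
</code></pre>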
36 | 39,576,340 | rename the pandas Series | <p>I ran into something weird when renaming a pandas Series with a datetime.date</p>
<pre><code>import pandas as pd
a = pd.Series([1, 2, 3, 4], name='t')
</code></pre>
<p>I got <code>a</code> is:</p>
<pre><code>0 1
1 2
2 3
3 4
Name: t, dtype: int64
</code></pre>
<p>Then, I have:</p>
<pre><code>ts = pd.Series([pd.Timestamp('2016-05-16'),
pd.Timestamp('2016-05-17'),
pd.Timestamp('2016-05-18'),
pd.Timestamp('2016-05-19')], name='time')
</code></pre>
<p>with <code>ts</code> as:</p>
<pre><code>0 2016-05-16
1 2016-05-17
2 2016-05-18
3 2016-05-19
Name: time, dtype: datetime64[ns]
</code></pre>
<p>Now, if I do:</p>
<pre><code>ts_date = ts.apply(lambda x: x.date())
dates = ts_date.unique()
</code></pre>
<p>I got <code>dates</code> as:</p>
<pre><code>array([datetime.date(2016, 5, 16), datetime.date(2016, 5, 17),
datetime.date(2016, 5, 18), datetime.date(2016, 5, 19)], dtype=object)
</code></pre>
<hr>
<p>I have two approaches. The weird thing is, if I do the following renaming (approach 1):</p>
<pre><code>for one_date in dates:
a.rename(one_date)
print one_date, a.name
</code></pre>
<p>I got:</p>
<pre><code>2016-05-16 t
2016-05-17 t
2016-05-18 t
2016-05-19 t
</code></pre>
<p>But if I do it like this (approach 2):</p>
<pre><code>for one_date in dates:
a = pd.Series(a, name=one_date)
print one_date, a.name
2016-05-16 2016-05-16
2016-05-17 2016-05-17
2016-05-18 2016-05-18
2016-05-19 2016-05-19
</code></pre>
<hr>
<p>My question is: why the method <code>rename</code> does not work (in approach 1)?</p> | <p>Because <code>rename</code> does not change the object unless you set the <code>inplace</code> argument as <code>True</code>, as seen in the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rename.html" rel="nofollow">docs</a>.</p>
<p>Notice that the <code>copy</code> argument can be used so you don't have to create a new series passing the old series as argument, like in your second example.</p> | python|python-2.7|pandas | 2 |
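<p>So in approach 1, either reassign the result or pass <code>inplace=True</code>; a sketch using the loop from the question:</p>
<pre><code>for one_date in dates:
    a = a.rename(one_date)               # rename returns a new Series by default
    # a.rename(one_date, inplace=True)   # ...or mutate the existing one instead
    print one_date, a.name
</code></pre>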
37 | 39,598,618 | Pandas Filter on date for quarterly ends | <p>In the index column I have a list of dates:</p>
<pre><code>DatetimeIndex(['2010-12-31', '2011-01-02', '2011-01-03', '2011-01-29',
'2011-02-26', '2011-02-28', '2011-03-26', '2011-03-31',
'2011-04-01', '2011-04-03',
...
'2016-02-27', '2016-02-29', '2016-03-26', '2016-03-31',
'2016-04-01', '2016-04-03', '2016-04-30', '2016-05-31',
'2016-06-30', '2016-07-02'],
dtype='datetime64[ns]', length=123, freq=None)
</code></pre>
<p>However, I want to keep only those dates whose month and day equal 12/31, 3/31, 6/30 or 9/30, to get the value at the end of each quarter.</p>
<p>Is there a good way of going about this?</p> | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.is_quarter_end.html#pandas.Series.dt.is_quarter_end" rel="noreferrer"><code>is_quarter_end</code></a> to filter the row labels:</p>
<pre><code>In [151]:
df = pd.DataFrame(np.random.randn(400,1), index= pd.date_range(start=dt.datetime(2016,1,1), periods=400))
df.loc[df.index.is_quarter_end]
Out[151]:
0
2016-03-31 -0.474125
2016-06-30 0.931780
2016-09-30 -0.281271
2016-12-31 0.325521
</code></pre> | python|datetime|pandas | 5 |
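<p>The same attribute also works directly on the <code>DatetimeIndex</code> from the question (called <code>idx</code> here just for illustration), without building a DataFrame first:</p>
<pre><code>quarter_ends = idx[idx.is_quarter_end]
</code></pre>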
38 | 39,715,686 | Cannot get pandas to open CSV [Python, Jupyter, Pandas] | <p><strong>OBJECTIVE</strong></p>
<p>Using Jupyter notebooks, import a csv file for data manipulation</p>
<p><strong>APPROACH</strong></p>
<ol>
<li>Import necessary libraries for statistical analysis (pandas, matplotlib, sklearn, etc.)</li>
<li><strong>Import data set using pandas</strong></li>
<li>Manipulate data</li>
</ol>
<p><strong>CODE</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
import pandas as pd
from sklearn.cluster import KMeans
data = pd.read_csv("../data/walmart-stores.csv")
print(data)
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code>OSError: File b'../data/walmart-stores.csv' does not exist
</code></pre>
<p><strong>FOLDER STRUCTURE</strong></p>
<pre><code>Anconda
env
kmean.ipynb
data
walmart-stores.csv
(other folders [for anaconda env])
(other folders)
</code></pre>
<p><strong>QUESTION(S)</strong></p>
<ol>
<li>The error clearly states that the csv file cannot be found. I imagine it has to do with the project running in an Anaconda environment, but I thought this was the purpose of Anaconda environments in the first place. Am I wrong?</li>
<li>After answering the question, <strong>are there any other suggestions on how I should structure my Jupyter Notebooks when using Anaconda?</strong></li>
</ol>
<p><em>NOTES: I am new to python, anaconda, and jupyter notebooks so please disregard our naivety/stupidity. Thank you!</em></p> | <p>Fellow newbie here!
Try removing the "../" from your data location</p>
<p>Change</p>
<pre><code>data = pd.read_csv("../data/walmart-stores.csv")
</code></pre>
<p>to </p>
<pre><code>data = pd.read_csv("data/walmart-stores.csv")
</code></pre> | python|csv|pandas|matplotlib|jupyter-notebook | 0 |
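<p>When a relative path fails like this, it can also help to check which directory the notebook kernel is actually running from and to test both candidate paths; a small sketch:</p>
<pre><code>import os

print(os.getcwd())                                   # working directory of the kernel
print(os.path.exists("data/walmart-stores.csv"))     # "data" next to the notebook
print(os.path.exists("../data/walmart-stores.csv"))  # "data" one level up
</code></pre>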
39 | 44,144,538 | Find values in numpy array space-efficiently | <p>I am trying to create a copy of my numpy array that contains only certain values. This is the code I was using:</p>
<pre><code>A = np.array([[1,2,3],[4,5,6],[7,8,9]])
query_val = 5
B = (A == query_val) * np.array(query_val, dtype=np.uint16)
</code></pre>
<p>... which does exactly what I want.</p>
<p>Now, I'd like query_val to be more than just one value. The answer here: <a href="https://stackoverflow.com/questions/16343752/numpy-where-function-multiple-conditions">Numpy where function multiple conditions</a> suggests using a logical and operation, but that's very space inefficient because you use == several times, creating multiple intermediate results.</p>
<p>In my case, that means I don't have enough RAM to do it. Is there a way to do this properly in native numpy with minimal space overhead?</p> | <p>Here's one approach using <a href="https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.searchsorted.html" rel="nofollow noreferrer"><code>np.searchsorted</code></a> -</p>
<pre><code>def mask_in(a, b):
idx = np.searchsorted(b,a)
idx[idx==b.size] = 0
return np.where(b[idx]==a, a,0)
</code></pre>
<p>Sample run -</p>
<pre><code>In [356]: a
Out[356]:
array([[5, 1, 4],
[4, 5, 6],
[2, 4, 9]])
In [357]: b
Out[357]: array([2, 4, 5])
In [358]: mask_in(a,b)
Out[358]:
array([[5, 0, 4],
[4, 5, 0],
[2, 4, 0]])
</code></pre> | python|numpy | 0 |
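<p>Note that <code>np.searchsorted</code> assumes <code>b</code> is sorted. If there are several query values, <code>np.isin</code> is another option that builds a single boolean mask (a sketch with the same <code>a</code> and <code>b</code> as above):</p>
<pre><code>B = np.where(np.isin(a, b), a, 0)
</code></pre>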
40 | 69,371,270 | tensorflow.python.framework.errors_impl.AlreadyExistsError | <p>I trained a ImageClassifier model using <a href="https://teachablemachine.withgoogle.com/train/image" rel="nofollow noreferrer">Teachable Machine</a> and I tried to run the following code on VScode in python 3.8</p>
<pre><code>from keras.models import load_model
from PIL import Image, ImageOps
import numpy as np
# Load the model
model = load_model('keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('1.jpeg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
#turn the image into a numpy array
image_array = np.asarray(image)
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)
</code></pre>
<p>And I got the following errors</p>
<pre><code>2021-09-29 11:37:52.587380: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/dropout/temp_rate_is_zero
Traceback (most recent call last):
File "c:/Users/sumuk/OneDrive/Documents/ML/converted_keras/1.py", line 1, in <module>
from keras.models import load_model
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\__init__.py", line 25, in <module>
from keras import models
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\models.py", line 20, in <module>
from keras import metrics as metrics_module
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\metrics.py", line 26, in <module>
from keras import activations
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\activations.py", line 20, in <module>
from keras.layers import advanced_activations
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\layers\__init__.py", line 31, in <module>
from keras.layers.preprocessing.image_preprocessing import CenterCrop
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\layers\preprocessing\image_preprocessing.py",
line 24, in <module>
from keras.preprocessing import image as image_preprocessing
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\preprocessing\__init__.py", line 26, in <module>
from keras.utils import all_utils as utils
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\utils\all_utils.py", line 34, in <module>
from keras.utils.multi_gpu_utils import multi_gpu_model
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\utils\multi_gpu_utils.py", line 20, in <module>
from keras.layers.core import Lambda
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\layers\core\__init__.py", line 20, in <module> from keras.layers.core.dropout import Dropout
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\layers\core\dropout.py", line 26, in <module>
keras_temporary_dropout_rate = tf.__internal__.monitoring.BoolGauge(
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\monitoring.py", line 360, in __init__
super(BoolGauge, self).__init__('BoolGauge', _bool_gauge_methods,
File "C:\Users\sumuk\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\eager\monitoring.py", line 135, in __init__
self._metric = self._metric_methods[self._label_length].create(*args)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.
</code></pre>
<p>Here is the <a href="https://drive.google.com/file/d/1V5Ivpwbka0wIC60IAxLzxeKWBx0TeHNb/view?usp=sharing" rel="nofollow noreferrer">model</a>,
I couldn't find any related solutions online, what should be done?
Thank you</p> | <p>To run this code, you need to use</p>
<pre><code>from tensorflow.keras.models import load_model
</code></pre>
<p>in place of</p>
<pre><code>from keras.models import load_model
</code></pre>
<p>This issue is caused by a mismatch between the <code>tensorflow</code> and <code>keras</code> versions available on your system. Make sure you are using matching versions of <code>tensorflow</code> and <code>keras</code> (or at least the latest <code>tensorflow 2.7</code>) and try executing the same code again. Let us know if the issue still persists.</p> | python|tensorflow|keras|image-classification | 0 |
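<p>Concretely, only the first import of the script changes; the rest stays the same:</p>
<pre><code>from tensorflow.keras.models import load_model
from PIL import Image, ImageOps
import numpy as np

# Load the model exactly as before
model = load_model('keras_model.h5')
</code></pre>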
41 | 69,414,137 | Parsing (from text) a table with two-row header | <p>I'm parsing the output of a .ipynb. The output was generated as plain text (using print) instead of a dataframe (not using print), in the spirit of:</p>
<pre><code>print( athletes.groupby('NOC').count() )
</code></pre>
<p>I came up with hacks (e.g. using <code>pandas.read_fwf()</code>) for the various cases, but I was wondering if anyone has an idea for a more elegant solution.</p>
<hr />
<p>It keeps nagging me that it's weird (bad design?) that the default print of a pandas.dataframe can't be parsed by pandas.</p>
<hr />
<p>EDIT: added more examples to the first table</p>
<p>Table 1</p>
<pre><code> Name Discipline
NOC
United States of America 615 615
Japan 586 586
Australia 470 470
People's Republic of China 401 401
Germany 400 400
</code></pre>
<p>Table 2</p>
<pre><code> Name
NOC Discipline
United States of America Athletics 144
Germany Athletics 95
Great Britain Athletics 75
Italy Athletics 73
Japan Athletics 70
Bermuda Triathlon 1
Libya Athletics 1
Palestine Athletics 1
San Marino Swimming 1
Kiribati Athletics 1
</code></pre>
<p>Table 3</p>
<pre><code> Name NOC Discipline
1410 CA Liliana Portugal Athletics
1411 CABAL Juan-Sebastian Colombia Tennis
1412 CABALLERO Denia Cuba Athletics
1413 CABANA PEREZ Cristina Spain Judo
1414 CABECINHA Ana Portugal Athletics
</code></pre> | <p>Assuming the following input:</p>
<pre><code>text = ''' Name Discipline
NOC
United States of America 615 615
Japan 586 586
Australia 470 470
People's Republic of China 401 401
Germany 400 400'''
</code></pre>
<p>You can use <code>pandas.read_csv</code> with the '\s\s+' separator:</p>
<pre><code>import pandas as pd
import io
df = pd.read_csv(io.StringIO(text), sep='\s\s+', engine='python')
</code></pre>
<p>Output:</p>
<pre><code>>>> df.index
Index(['United States of America', 'Japan', 'Australia',
'People's Republic of China', 'Germany'],
dtype='object', name='NOC')
>>> df.columns
Index(['Name', 'Discipline'], dtype='object')
>>> df
Name Discipline
NOC
United States of America 615 615
Japan 586 586
Australia 470 470
People's Republic of China 401 401
Germany 400 400
</code></pre> | python|pandas|jupyter-notebook | 1 |
42 | 69,488,329 | Apply fuzzy ratio to two dataframes | <p>I have two dataframes where <strong>I want to fuzzy-string-compare the addresses and apply my function to both dataframes</strong>:</p>
<pre><code>sample1 = pd.DataFrame(data1.sample(n=200, random_state=42))
sample2 = pd.DataFrame(data2.sample(n=200, random_state=13))
def get_ratio(row):
sample1 = row['address']
sample2 = row['address']
return fuzz.token_set_ratio(sample1, sample2)
match = data[data.apply(get_ratio, axis=1) >= 78] #I want to apply get_ratio to both sample1 and sample2
no_matched = data[data.apply(get_ratio, axis=1) <= 77] #I want to apply get_ratio to both sample1 and sample2
</code></pre>
<p><strong>Thanks in advance for your help!</strong></p> | <p>You need to create the Cartesian product (all pairs) of your addresses, score each pair, and then keep the pairs that match. You can find a similar question <a href="https://stackoverflow.com/questions/68978444/how-to-do-fuzzy-match-merge-to-match-based-on-a-few-columns/68979157#68979157">here</a>.</p>
<p>For your case, first create all the pairs:</p>
<pre><code>import itertools

combs = list(itertools.product(data1["address"], data2["address"]))
combs = pd.DataFrame(combs)
</code></pre>
<p>Then use the proper method for matching:</p>
<pre><code>combs['score'] = combs.apply(lambda x: fuzz.token_set_ratio(x[0],x[1]), axis=1)
</code></pre>
<p>Now, based on the score, you can find the ones that have matched and the ones that have not.</p>
<p>I advise you to group and clean the addresses first (i.e., lowering the case, removing duplicates). Otherwise it might take a very long time to compute.</p> | python|python-3.x|pandas|function|fuzzywuzzy | 1 |
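<p>With the scored pairs, applying the 78 threshold from the question is then straightforward (a sketch using the <code>combs</code> frame above):</p>
<pre><code>matched = combs[combs['score'] >= 78]
no_matched = combs[combs['score'] < 78]
</code></pre>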
43 | 69,431,754 | How can I reshape a Mat to a tensor to use in a deep neural network in c++? | <p>I want to deploy a trained deep neural network in a C++ application. After reading the image and using the blobFromImage function (I used OpenCV 4.4), I received the error below, which indicates that I have a problem with the dimensions and shape of my tensor. The input of the deep neural network is (h=150, w=100, channels=3). Is the blobFromImage function the only way to make a tensor? How can I fix this problem? Thanks in advance.
I put my code and the error below.</p>
<pre><code>#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
int main() {
std::vector< cv::Mat > outs;
std::cout << "LOAD DNN in CPP Project!" << std::endl;
cv::Mat image = cv::imread("example.png",1/*, cv::IMREAD_GRAYSCALE*/);
cv::dnn::Net net;
net = cv::dnn::readNetFromONNX("model.onnx");
cv::Mat blob;
cv::dnn::blobFromImage(image, blob, 1/255, cv::Size(100,150), cv::Scalar(0,0,0), false,false);
net.setInput(blob);
net.forward( outs, "output");
return 0;
}
</code></pre>
<p>and the error is:</p>
<pre><code>global /home/hasa/opencv4.4/opencv-4.4.0/modules/dnn/src/dnn.cpp (3441) getLayerShapesRecursively OPENCV/DNN: [Convolution]:(model/vgg19/block1_conv1/BiasAdd:0): getMemoryShapes() throws exception. inputs=1 outputs=0/1 blobs=2
[ERROR:0] global /home/hasa/opencv4.4/opencv-4.4.0/modules/dnn/src/dnn.cpp (3447) getLayerShapesRecursively input[0] = [ 1 100 3 150 ]
[ERROR:0] global /home/hasa/opencv4.4/opencv-4.4.0/modules/dnn/src/dnn.cpp (3455) getLayerShapesRecursively blobs[0] = CV_32FC1 [ 64 3 3 3 ]
[ERROR:0] global /home/hasa/opencv4.4/opencv-4.4.0/modules/dnn/src/dnn.cpp (3455) getLayerShapesRecursively blobs[1] = CV_32FC1 [ 64 1 ]
[ERROR:0] global /home/hasa/opencv4.4/opencv-4.4.0/modules/dnn/src/dnn.cpp (3457) getLayerShapesRecursively Exception message: OpenCV(4.4.0) /home/hasa/opencv4.4/opencv- 4.4.0/modules/dnn/src/layers/convolution_layer.cpp:346: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 100 in function 'getMemoryShapes'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.4.0) /home/hasa/opencv4.4/opencv- 4.4.0/modules/dnn/src/layers/convolution_layer.cpp:346: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 100 in function 'getMemoryShapes'
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
</code></pre> | <p>The following code works for me. The only difference is that I'm loading tensorflow model.</p>
<pre><code>inputNet = cv::dnn::readNetFromTensorflow(pbFilePath);
// load image of rowsxcols = 160x160
cv::Mat img, imgn, blob;
img = cv::imread("1.jpg");
//cv::cvtColor(img, img, CV_GRAY2RGB);// convert gray to color image
// normalize image (if needed)
//img.convertTo(imgn, CV_32FC3);//float32, 3channels (depends on your model)
//imgn = (imgn-127.5)/128.0;//normalized crop (in rgb)
//extract feature vector
cv::dnn::blobFromImage(img, blob, 1.0, cv::Size(160, 160), 0, false, false);// pass imgn instead if the normalization above is enabled
inputNet.setInput(blob);
cv::Mat feature_vector = inputNet.forward();
</code></pre> | c++|tensorflow|opencv|deep-learning|neural-network | -1 |
44 | 69,318,826 | Tensorflow Object-API: convert ssd model to tflite and use it in python | <p>I have a hard time to convert a given tensorflow model into a tflite model and then use it. I already posted a <a href="https://stackoverflow.com/questions/69305190/object-detection-with-tflite">question</a> where I described my problem but didn't share the model I was working with, because I am not allowed to. Since I didn't find an answer this way, I tried to convert a public model (<a href="http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz" rel="nofollow noreferrer">ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu</a>).</p>
<p><a href="https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/convert_odt_model_to_TFLite.ipynb#scrollTo=TIY3cxDgsxuZ" rel="nofollow noreferrer">Here</a> is a colab tutorial from <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">the object detection api</a>. I just run the whole script without changes (its the same model) and downloaded the generated models (with and without metadata). I uploaded them <a href="https://drive.google.com/drive/folders/1dN7kGm_MLrq2riKk5h3fAUosaNuo32qY" rel="nofollow noreferrer">here</a> together with a sample picture from the coco17 train dataset.</p>
<p>I tried to use those models directly in python, but the results feel like garbage.</p>
<p>Here is the code I used, I followed this <a href="https://heartbeat.comet.ml/running-tensorflow-lite-object-detection-models-in-python-8a73b77e13f8" rel="nofollow noreferrer">guide</a>. I changed the indexes for rects, scores and classes because otherwise the results were not in the right format.</p>
<pre><code>#interpreter = tf.lite.Interpreter("original_models/model.tflite")
interpreter = tf.lite.Interpreter("original_models/model_with_metadata.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
size = 640
def draw_rect(image, box):
y_min = int(max(1, (box[0] * size)))
x_min = int(max(1, (box[1] * size)))
y_max = int(min(size, (box[2] * size)))
x_max = int(min(size, (box[3] * size)))
# draw a rectangle on the image
cv2.rectangle(image, (x_min, y_min), (x_max, y_max), (255, 255, 255), 2)
file = "images/000000000034.jpg"
img = cv2.imread(file)
new_img = cv2.resize(img, (size, size))
new_img = cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB)
interpreter.set_tensor(input_details[0]['index'], [new_img.astype("f")])
interpreter.invoke()
rects = interpreter.get_tensor(
output_details[1]['index'])
scores = interpreter.get_tensor(
output_details[0]['index'])
classes = interpreter.get_tensor(
output_details[3]['index'])
for index, score in enumerate(scores[0]):
draw_rect(new_img,rects[0][index])
#print(rects[0][index])
print("scores: ",scores[0][index])
print("class id: ", classes[0][index])
print("______________________________")
cv2.imshow("image", new_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>This leads to the following console output</p>
<pre><code>scores: 0.20041436
class id: 51.0
______________________________
scores: 0.08925027
class id: 34.0
______________________________
scores: 0.079722285
class id: 34.0
______________________________
scores: 0.06676647
class id: 71.0
______________________________
scores: 0.06626186
class id: 15.0
______________________________
scores: 0.059938848
class id: 86.0
______________________________
scores: 0.058229476
class id: 34.0
______________________________
scores: 0.053791136
class id: 37.0
______________________________
scores: 0.053478718
class id: 15.0
______________________________
scores: 0.052847564
class id: 43.0
______________________________
</code></pre>
<p>and the resulting image</p>
<p><a href="https://i.stack.imgur.com/Fs99G.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fs99G.jpg" alt="model output" /></a>.</p>
<p>I tried different images from the original training dataset and never got good results. I think the output layer is broken or maybe some postprocessing is missing?</p>
<p>I also tried to use the conversion method given in the <a href="https://www.tensorflow.org/lite/convert#convert_a_savedmodel_recommended_" rel="nofollow noreferrer">official TensorFlow documentation</a>.</p>
<pre><code>import tensorflow as tf
saved_model_dir = 'tf_models/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/'
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>But when I try to use the model, I get a <code>ValueError: Cannot set tensor: Dimension mismatch. Got 640 but expected 1 for dimension 1 of input 0.</code></p>
<p>Does anyone have an idea what I am doing wrong?</p>
<p><strong>Update:</strong> After Farmmakers' advice, I tried changing the input dimensions of the model generated by the short script at the end. The shape before was:</p>
<pre><code>[{'name': 'serving_default_input_tensor:0',
'index': 0,
'shape': array([1, 1, 1, 3], dtype=int32),
'shape_signature': array([ 1, -1, -1, 3], dtype=int32),
'dtype': numpy.uint8,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
</code></pre>
<p>So adding one dimension would not be enough. Therefore I used <code>interpreter.resize_tensor_input(0, [1,640,640,3])</code>. Now feeding an image through the net works.</p>
<p>Unfortunately I still can't make any sense of the output. Here is the printout of the output details:</p>
<pre><code>[{'name': 'StatefulPartitionedCall:6',
'index': 473,
'shape': array([ 1, 51150, 4], dtype=int32),
'shape_signature': array([ 1, 51150, 4], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:0',
'index': 2233,
'shape': array([1, 1], dtype=int32),
'shape_signature': array([ 1, -1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:5',
'index': 2198,
'shape': array([1], dtype=int32),
'shape_signature': array([1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:7',
'index': 493,
'shape': array([ 1, 51150, 91], dtype=int32),
'shape_signature': array([ 1, 51150, 91], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:1',
'index': 2286,
'shape': array([1, 1, 1], dtype=int32),
'shape_signature': array([ 1, -1, -1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:2',
'index': 2268,
'shape': array([1, 1], dtype=int32),
'shape_signature': array([ 1, -1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:4',
'index': 2215,
'shape': array([1, 1], dtype=int32),
'shape_signature': array([ 1, -1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
{'name': 'StatefulPartitionedCall:3',
'index': 2251,
'shape': array([1, 1, 1], dtype=int32),
'shape_signature': array([ 1, -1, -1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
</code></pre>
<p>I added the so generated tflite model to the <a href="https://drive.google.com/drive/folders/1dN7kGm_MLrq2riKk5h3fAUosaNuo32qY" rel="nofollow noreferrer">google drive</a>.</p>
<p><strong>Update2:</strong> I added a directory to the <a href="https://drive.google.com/drive/folders/1dN7kGm_MLrq2riKk5h3fAUosaNuo32qY" rel="nofollow noreferrer">google drive</a> which contains a notebook that uses the full size model and produces the correct output. If you execute the whole notebook it should produce the following image to your disk.</p>
<p><a href="https://i.stack.imgur.com/GFHaf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GFHaf.jpg" alt="enter image description here" /></a></p> | <p>For the models from Object Detection APIs to work well with TFLite, you have to convert it to TFLite-friendly graph that has custom op.</p>
<p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md</a></p>
<p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md" rel="nofollow noreferrer">(TF1 doc)</a></p>
<p>You can also try using <a href="https://www.tensorflow.org/lite/tutorials/model_maker_object_detection" rel="nofollow noreferrer">TensorFlow Lite Model Maker</a></p> | tensorflow|computer-vision|tensorflow-lite|object-detection-api|single-shot-detector | 1 |
45 | 53,834,223 | Comparing a `tf.constant` to an integer | <p>In TensorFlow, I have a <code>tf.while_loop</code>, where the <code>body</code> argument is defined as the following function:</p>
<pre><code>def loop_body(step_num, x):
if step_num == 0:
x += 1
else:
x += 2
step_num = tf.add(step_num, 1)
return step_num, x
</code></pre>
<p>The problem is that the line <code>step_num == 0</code> is never <code>True</code>, even though the initial value of <code>step_num</code> is <code>0</code>. I am assuming that this is because <code>step_num</code> is not an integer, but in fact, a <code>tf.constant</code> which was defined outside the loop: <code>step_num = tf.constant(0)</code>. So I am comparing a <code>tf.constant</code> to a Python integer, which will be <code>False</code>.</p>
<p>What should I use instead for this comparison?</p> | <p>First approach: using <code>tf.cond</code>:</p>
<pre><code>def loop_body(step_num, x):
x = tf.cond(tf.equal(step_num,0),lambda :x+1,lambda :x+2)
step_num = tf.add(step_num, 1)
return step_num, x
</code></pre>
<p>Second approach: using <code>autograph</code>:</p>
<pre><code>from tensorflow.contrib import autograph as ag
ag.to_graph(loop_body2)(step_num, x)
</code></pre>
<p>An example:</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import autograph as ag
def loop_body(step_num, x):
x = tf.cond(tf.equal(step_num,0),lambda :x+1,lambda :x+2)
step_num = tf.add(step_num, 1)
return step_num, x
def loop_body2(step_num, x):
if step_num == 0:
x += 1
else:
x += 2
step_num = tf.add(step_num, 1)
return step_num, x
step_num = tf.constant(0)
x = tf.constant(2)
result1 = loop_body(step_num, x)
result2 = ag.to_graph(loop_body2)(step_num, x)
with tf.Session() as sess:
print(sess.run(result1))
print(sess.run(result2))
#print
(1, 3)
(1, 3)
</code></pre> | python|tensorflow | 3 |
46 | 54,061,940 | How to match a column entry in pandas against another similar column entry in a different row? | <p>Say for a given table :</p>
<pre><code>pd.DataFrame([['Johnny Depp', 'Keanu Reeves'],
['Robert De Niro', 'Nicolas Cage'],
['Brad Pitt', 'Johnny Depp'],
['Leonardo DiCaprio', 'Morgan Freeman'],
['Tom Cruise', 'Hugh Jackman'],
['Morgan Freeman', 'Robert De Niro']],
columns=['Name1', 'Name2'])
</code></pre>
<p>I wish the output as :</p>
<pre><code>pd.DataFrame([['Johnny Depp', 'Johnny Depp'],
['Robert De Niro', 'Robert De Niro'],
['Brad Pitt', NaN],
['Leonardo DiCaprio', NaN],
['Tom Cruise', NaN],
['Morgan Freeman', 'Morgan Freeman'],
[NaN ,'Keanu Reeves'],
[NaN ,'Nicolas Cage'],
[NaN ,'Hugh Jackman']],
columns=['Name1', 'Name2'])
</code></pre>
<p>I wish to map similar names in the two columns against each other, and the rest as separate row entries.
I know Regex can solve this, but I want to do it at scale since I have a lot of rows. I tried using different inbuilt pandas functions and word libraries like FastText but couldn't solve this.</p>
<p>I wish to map column Name1 to Name2.</p>
<p>How do I solve this? PS: I still think I am making some silly errors.</p> | <p>First, you make a list with all the actors' names.</p>
<pre><code>actors = ['Johnny Depp', 'Keanu Reeves',
'Robert De Niro', 'Nicolas Cage',
'Brad Pitt', 'Johnny Depp',
'Leonardo DiCaprio', 'Morgan Freeman',
'Tom Cruise', 'Hugh Jackman',
'Morgan Freeman', 'Robert De Niro',
]
</code></pre>
<p>Then use the collections.Counter class. It is a powerful class which is used when we
want to find the frequency of an element.</p>
<pre><code>from collections import Counter
actors_counts = Counter(actors)
actors_list = list(actors_counts.items())
print(actors_list)
</code></pre>
<p>Then we make a pandas DataFrame,</p>
<pre><code>import pandas as pd
actors_df = pd.DataFrame(actors_list, columns=['Name','Frequency'])
print(actors_df)
</code></pre>
<p>It outputs, </p>
<pre><code> Name Frequency
0 Johnny Depp 2
1 Keanu Reeves 1
2 Robert De Niro 2
3 Nicolas Cage 1
4 Brad Pitt 1
5 Leonardo DiCaprio 1
6 Morgan Freeman 2
7 Tom Cruise 1
8 Hugh Jackman 1
</code></pre>
<p>I make a dict whose keys are the actors' names and whose values are either the actor's name or the string 'NaN':</p>
<pre><code>actors_dict = {}
for item in range(len(actors_df)):
name = str(actors_df['Name'].iloc[item])
freq = actors_df['Frequency'].iloc[item]
if freq>1:
actors_dict[name] = name
else:
actors_dict[name] = 'NaN'
</code></pre>
<p>The actors_dict is</p>
<pre><code>{'Johnny Depp': 'Johnny Depp',
'Keanu Reeves': 'NaN',
'Robert De Niro': 'Robert De Niro',
'Nicolas Cage': 'NaN',
'Brad Pitt': 'NaN',
'Leonardo DiCaprio': 'NaN',
'Morgan Freeman': 'Morgan Freeman',
'Tom Cruise': 'NaN',
'Hugh Jackman': 'NaN'}
</code></pre>
<p>Lastly, add the keys in a 'Name1' column and the values in a 'Name2' column of a DataFrame,</p>
<pre><code>a = list(actors_dict.keys())
b = list(actors_dict.values())
actors = pd.concat([pd.DataFrame([(a[i], b[i])], columns=['Name1', 'Name2']) for i in range(len(a))],ignore_index=True)
</code></pre>
<p>The output should be,</p>
<pre><code> Name1 Name2
0 Johnny Depp Johnny Depp
1 Keanu Reeves NaN
2 Robert De Niro Robert De Niro
3 Nicolas Cage NaN
4 Brad Pitt NaN
5 Leonardo DiCaprio NaN
6 Morgan Freeman Morgan Freeman
7 Tom Cruise NaN
8 Hugh Jackman NaN
</code></pre>
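<p>For completeness, a more compact sketch of the same duplicate-detection idea (exact string matches only, assuming the question's table is loaded as <code>df</code>):</p>
<pre><code>import pandas as pd

common = set(df['Name1']) & set(df['Name2'])        # names appearing in both columns
matched = pd.DataFrame({'Name1': df['Name1'],
                        'Name2': df['Name1'].where(df['Name1'].isin(common))})
leftover = pd.DataFrame({'Name2': [n for n in df['Name2'] if n not in common]})
result = pd.concat([matched, leftover], ignore_index=True)
</code></pre>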
<p>I hope this helps you.</p> | python|pandas | 0 |
47 | 54,096,827 | ValueError: Plan shapes are not aligned | <p>I have four data frames that import data from different Excel files (suppliers), and I am trying to combine these frames. When I include df3 in the concatenation I get an error. I have referred to a lot of articles on similar errors but have not found a clue.</p>
<p>I tried upgrading pandas.
Tried the following code as well
Data = DataFrame([df1,df2,df3,df4],columns=['Supplier','Entity','Address','Site','State','Waste Description','Quantity','UOM','Disposal Facility','Disposal Cost','Trans Cost']) </p>
<pre><code> df1 = data1[['Supplier','Entity','Address','Site','State','Waste Description','Quantity','UOM','Disposal Facility']]
Shape: (3377, 9)
df2 = data2[['Supplier','Entity','Address','Site','State','Waste Description','Quantity','UOM','unit price','Invoice Total','Disposal Facility']]
Shape:(13838, 11)
df3 = data3[['Supplier','Entity','Address','Site','State','Waste Description','Quantity','UOM','Disposal Facility']]
Shape:(1185, 10)
df4 = data4[['Supplier','Entity','Address','Site','State','Waste Description','Quantity','UOM','Disposal Facility','Disposal Cost','Trans Cost']]
Shape: (76, 11)
data = [df1,df2,df3,df4]
data1 = pd.concat(data)
ValueError: Plan shapes are not aligned
</code></pre>
<p>When I remove df3 the data gets combined. I read that the number of columns between dataframes doesn't matter.</p> | <p>It worked after entering the following code:</p>
<pre><code>data3['Quantity'] = data3['Quantity'].replace(" ","")
</code></pre> | python|pandas | 0
48 | 53,866,744 | Weighted mean in pandas - string indices must be integers | <p>I am going to calculate a weighted average based on a CSV file. I have already loaded columns A and B, which contain float values.
My csv file:</p>
<pre><code>A B
170.804 2854
140.924 510
164.842 3355
</code></pre>
<p>Pattern</p>
<pre><code>(w1*x1 + w2*x2 + ...) / (w1 + w2 + w3 + ...)
</code></pre>
<p>My code:</p>
<pre><code>c = df['B'] # ok
wa = (df['B'] * df['A']).sum() / df['B'].sum() # TypeError: string indices must be integers
</code></pre> | <p>IIUC, you might try this (the line of code you wrote should work as well):</p>
<pre><code>wa = df['A'].dot(df['B']) / df['B'].sum()
print(wa)
165.55897693109094
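# Note (added): numpy can also compute this directly, assuming both columns are numeric:
# wa = np.average(df['A'], weights=df['B'])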
</code></pre> | python|pandas | 0 |
49 | 53,904,155 | Flexibly select pandas dataframe rows using dictionary | <p>Suppose I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'color':['red', 'green', 'blue'], 'brand':['Ford','fiat', 'opel'], 'year':[2016,2016,2017]})
brand color year
0 Ford red 2016
1 fiat green 2016
2 opel blue 2017
</code></pre>
<p>I know that to select using multiple columns I can do something like:</p>
<pre><code>new_df = df[(df['color']=='red')&(df['year']==2016)]
</code></pre>
<p>Now what I would like to do is find a way to use a dictionary to select the rows I want where the keys of the dictionary represent the columns mapping to the allowed values. For example applying the following dictionary <code>{'color':'red', 'year':2016}</code> on df would yield the same result as new_df. </p>
<p>I can already do it with a for loop, but I'd like to know if there are any <strong>faster</strong> and/or more '<strong>pythonic</strong>' ways of doing it!</p>
<p>Please include time taken of method.</p> | <p>With single expression:</p>
<pre><code>In [728]: df = pd.DataFrame({'color':['red', 'green', 'blue'], 'brand':['Ford','fiat', 'opel'], 'year':[2016,2016,2017]})
In [729]: d = {'color':'red', 'year':2016}
In [730]: df.loc[np.all(df[list(d)] == pd.Series(d), axis=1)]
Out[730]:
brand color year
0 Ford red 2016
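# A pandas-only variant of the same idea (no explicit numpy call):
# df.loc[(df[list(d)] == pd.Series(d)).all(axis=1)]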
</code></pre> | python|python-3.x|pandas|dataframe|select | 2 |
50 | 54,058,953 | Storing more than a million .txt files into a pandas dataframe | <p>I have a set of more than a million records, all of them in the <code>.txt</code> format. Each <code>file.txt</code> has just one line:</p>
<blockquote>
<p>'user_name', 'user_nickname', 24, 45</p>
</blockquote>
<p>I need to run a distribution check on the aggregated list of numeric features from the million files. Hence, I needed to aggregate these files into a large data frame. The approach I have been following is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import glob
import os
import pandas as pd
import sqlite3
connex = sqlite3.connect("data/processed/aggregated-records.db")
files_lst = glob.glob("data/raw/*.txt")
files_read_count = 1
for file_name in files_lst:
data_df = pd.read_csv(file_name,
header=None,
names=['user_name', 'user_nickname',
'numeric_1', 'numeric_2'])
data_df['date_time'] = os.path.basename(file_name).strip(".txt")
data_df.to_sql(name=file_name, con=connex, if_exists="append", index=False)
files_read_count += 1
if (files_read_count % 10000) == 0:
print(files_read_count, " files read")
</code></pre>
<p>The issue I have is that with this approach, I am able to write to the database at a very slow pace (about 10,000 files in an hour). Is there any way to run this faster? </p> | <p>The following code cuts the processing time to 10,000 files a minute. This is an implementation of the suggestion from @DYZ <a href="https://stackoverflow.com/questions/54058953/storing-more-than-a-million-txt-files-into-a-pandas-dataframe?noredirect=1#comment94951786_54058953">here</a>.</p>
<pre><code>import csv, glob
with open('data/processed/aggregated-data.csv', 'w') as aggregated_csv_file:
writer = csv.writer(aggregated_csv_file, delimiter=',')
files_lst = glob.glob("data/raw/*.txt")
files_merged_count = 1
for file in files_lst:
with open(file) as input_file:
csv_reader = csv.reader(input_file, delimiter=',')
for row in csv_reader:
writer.writerow(row)
if (files_merged_count % 10000) == 0:
print(files_merged_count, "files merged")
files_merged_count += 1
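
# Afterwards the merged file can be loaded into pandas in one go for the
# distribution checks (column names assumed from the question):
# df = pd.read_csv('data/processed/aggregated-data.csv', header=None,
#                  names=['user_name', 'user_nickname', 'numeric_1', 'numeric_2'])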
</code></pre> | python|pandas|sqlite | 2 |
51 | 54,192,420 | How to use melt function in pandas for large table? | <p>I currently have data which looks like this: </p>
<pre><code> Afghanistan_co2 Afghanistan_income Year Afghanistan_population Albania_co2
1 NaN 603 1801 3280000 NaN
2 NaN 603 1802 3280000 NaN
3 NaN 603 1803 3280000 NaN
4 NaN 603 1804 3280000 NaN
</code></pre>
<p>and I would like to use melt to turn it into this: </p>
<p><img src="https://i.stack.imgur.com/3jUWa.png" alt="formatted data"></p>
<p>But with the labels instead as 'Year', 'Country', 'population Value',' co2 Value', 'income value'</p>
<p>It is a large dataset with many rows and columns, so I don't know what to do, I only have this so far: </p>
<pre><code>pd.melt(merged_countries_final, id_vars=['Year'])
</code></pre>
<p>I've done this since there does exist a column in the dataset titled 'Year'. </p>
<p>What should I do?</p> | <p>Just do it with <code>str.split</code> on your columns:</p>
<pre><code>df.set_index('Year',inplace=True)
df.columns=pd.MultiIndex.from_tuples(df.columns.str.split('_').map(tuple))
df=df.stack(level=0).reset_index().rename(columns={'level_1':'Country'})
df
Year Country co2 income population
0 1801 Afghanistan NaN 603.0 3280000.0
1 1802 Afghanistan NaN 603.0 3280000.0
2 1803 Afghanistan NaN 603.0 3280000.0
3 1804 Afghanistan NaN 603.0 3280000.0
</code></pre> | python|pandas | 1 |
52 | 38,401,845 | Scipy.linalg.logm produces an error where matlab does not | <p>The line <code>scipy.linalg.logm(np.diag([-1.j, 1.j]))</code> produces an error with scipy 0.17.1, while the same call to matlab, <code>logm(diag([-i, i]))</code>, produces valid output. I already filed a <a href="https://github.com/scipy/scipy/issues/6378" rel="nofollow">bugreport on github</a>, now I am here to ask for a workaround. Is there any implementation of logm in Python, that can do <code>logm(np.diag([-1.j, 1.j]))</code>? </p>
<p>EDIT: The error is fixed in scipy 0.18.0rc2, so this thread is closed.</p> | <p>I don't know enough about the calculation to understand the error. But it has something to do division by zero - probably in the real part.</p>
<p>Replacing the zero real part of the array with a small value works:</p>
<pre><code>In [40]: linalg.logm(np.diag([1e-16-1.j,1e-16+1.j]))
Out[40]:
array([[ 5.00000000e-33-1.57079633j, 0.00000000e+00+0.j ],
[ 0.00000000e+00+0.j , 5.00000000e-33+1.57079633j]])
</code></pre>
<p>So the small real part could be removed with</p>
<pre><code>In [47]: linalg.logm(np.diag([1e-16-1.j,1e-16+1.j])).imag*1j
Out[47]:
array([[-0.-1.57079633j, 0.+0.j ],
[ 0.+0.j , 0.+1.57079633j]])
</code></pre> | python|matlab|numpy|scipy | 1 |
53 | 38,156,023 | Properly shifting irregular time series in Pandas | <p>What's the proper way to shift this time series, and re-align the data to the same index? E.g. How would I generate the data frame with the same index values as "data," but where the value at each point was the last value seen as of 0.4 seconds after the index timestamp?</p>
<p>I'd expect this to be a rather common operation among people dealing with irregular and mixed frequency time series ("what's the last value as of an arbitrary time offset to my current time?"), so I would expect (hope for?) this functionality to exist...</p>
<p>Suppose I have the following data frame:</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> import time
>>>
>>> x = np.arange(10)
>>> #t = time.time() + x + np.random.randn(10)
... t = np.array([1467421851418745856, 1467421852687532544, 1467421853288187136,
... 1467421854838806528, 1467421855148979456, 1467421856415879424,
... 1467421857259467264, 1467421858375025408, 1467421859019387904,
... 1467421860235784448])
>>> data = pd.DataFrame({"x": x})
>>> data.index = pd.to_datetime(t)
>>> data["orig_time"] = data.index
>>> data
x orig_time
2016-07-02 01:10:51.418745856 0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544 1 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136 2 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528 3 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.148979456 4 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424 5 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264 6 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408 7 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904 8 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448 9 2016-07-02 01:11:00.235784448
</code></pre>
<p>I can write the following function:</p>
<pre><code>def time_shift(df, delta):
"""Shift a DataFrame object such that each row contains the last known
value as of the time `df.index + delta`."""
lookup_index = df.index + delta
mapped_indicies = np.searchsorted(df.index, lookup_index, side='left')
# Clamp bounds to allow us to index into the original DataFrame
cleaned_indicies = np.clip(mapped_indicies, 0,
len(mapped_indicies) - 1)
# Since searchsorted gives us an insertion point, we'll generally
# have to shift back by one to get the last value prior to the
# insertion point. I choose to keep contemporaneous values,
# rather than looking back one, but that's a matter of personal
# preference.
lookback = np.where(lookup_index < df.index[cleaned_indicies], 1, 0)
# And remember to re-clip to avoid index errors...
cleaned_indicies = np.clip(cleaned_indicies - lookback, 0,
len(mapped_indicies) - 1)
new_df = df.iloc[cleaned_indicies]
# We don't know what the value was before the beginning...
new_df.iloc[lookup_index < df.index[0]] = np.NaN
# We don't know what the value was after the end...
new_df.iloc[mapped_indicies >= len(mapped_indicies)] = np.NaN
new_df.index = df.index
return new_df
</code></pre>
<p>with the desired behavior:</p>
<pre><code>>>> time_shift(data, pd.Timedelta('0.4s'))
x orig_time
2016-07-02 01:10:51.418745856 0.0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544 1.0 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136 2.0 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528 4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:55.148979456 4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424 5.0 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264 6.0 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408 7.0 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904 8.0 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448 NaN NaT
</code></pre>
<p>As you can see, getting this calculation right is a bit tricky, so I'd much prefer a supported implementation vs. 'rolling my own'.</p>
<p>This doesn't work. It truncates the first argument and shifts all rows by 0 positions:</p>
<pre><code>>>> data.shift(0.4)
x orig_time
2016-07-02 01:10:51.418745856 0.0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:52.687532544 1.0 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.288187136 2.0 2016-07-02 01:10:53.288187136
2016-07-02 01:10:54.838806528 3.0 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.148979456 4.0 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.415879424 5.0 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.259467264 6.0 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.375025408 7.0 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.019387904 8.0 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.235784448 9.0 2016-07-02 01:11:00.235784448
</code></pre>
<p>This just adds an offset to data.index...:</p>
<pre><code>>>> data.shift(1, pd.Timedelta("0.4s"))
x orig_time
2016-07-02 01:10:51.818745856 0 2016-07-02 01:10:51.418745856
2016-07-02 01:10:53.087532544 1 2016-07-02 01:10:52.687532544
2016-07-02 01:10:53.688187136 2 2016-07-02 01:10:53.288187136
2016-07-02 01:10:55.238806528 3 2016-07-02 01:10:54.838806528
2016-07-02 01:10:55.548979456 4 2016-07-02 01:10:55.148979456
2016-07-02 01:10:56.815879424 5 2016-07-02 01:10:56.415879424
2016-07-02 01:10:57.659467264 6 2016-07-02 01:10:57.259467264
2016-07-02 01:10:58.775025408 7 2016-07-02 01:10:58.375025408
2016-07-02 01:10:59.419387904 8 2016-07-02 01:10:59.019387904
2016-07-02 01:11:00.635784448 9 2016-07-02 01:11:00.235784448
</code></pre>
<p>And this results in NaNs for all time points:</p>
<pre><code>>>> data.shift(1, pd.Timedelta("0.4s")).reindex(data.index)
x orig_time
2016-07-02 01:10:51.418745856 NaN NaT
2016-07-02 01:10:52.687532544 NaN NaT
2016-07-02 01:10:53.288187136 NaN NaT
2016-07-02 01:10:54.838806528 NaN NaT
2016-07-02 01:10:55.148979456 NaN NaT
2016-07-02 01:10:56.415879424 NaN NaT
2016-07-02 01:10:57.259467264 NaN NaT
2016-07-02 01:10:58.375025408 NaN NaT
2016-07-02 01:10:59.019387904 NaN NaT
2016-07-02 01:11:00.235784448 NaN NaT
</code></pre> | <p>Just like on <a href="https://stackoverflow.com/q/38131287/478288">this question</a>, you are asking for an asof-join. Fortunately, the next release of pandas (soon-ish) will have it! Until then, you can use a pandas Series to determine the value you want.</p>
<p>Original DataFrame:</p>
<pre><code>In [44]: data
Out[44]:
x
2016-07-02 13:27:05.249071616 0
2016-07-02 13:27:07.280549376 1
2016-07-02 13:27:08.666985984 2
2016-07-02 13:27:08.410521856 3
2016-07-02 13:27:09.896294912 4
2016-07-02 13:27:10.159203328 5
2016-07-02 13:27:10.492438784 6
2016-07-02 13:27:13.790925312 7
2016-07-02 13:27:13.896483072 8
2016-07-02 13:27:13.598456064 9
</code></pre>
<p>Convert to Series:</p>
<pre><code>In [45]: ser = pd.Series(data.x, data.index)
In [46]: ser
Out[46]:
2016-07-02 13:27:05.249071616 0
2016-07-02 13:27:07.280549376 1
2016-07-02 13:27:08.666985984 2
2016-07-02 13:27:08.410521856 3
2016-07-02 13:27:09.896294912 4
2016-07-02 13:27:10.159203328 5
2016-07-02 13:27:10.492438784 6
2016-07-02 13:27:13.790925312 7
2016-07-02 13:27:13.896483072 8
2016-07-02 13:27:13.598456064 9
Name: x, dtype: int64
</code></pre>
<p>Use the <code>asof</code> function:</p>
<pre><code>In [47]: ser.asof(ser.index + pd.Timedelta('4s'))
Out[47]:
2016-07-02 13:27:09.249071616 3
2016-07-02 13:27:11.280549376 6
2016-07-02 13:27:12.666985984 6
2016-07-02 13:27:12.410521856 6
2016-07-02 13:27:13.896294912 7
2016-07-02 13:27:14.159203328 9
2016-07-02 13:27:14.492438784 9
2016-07-02 13:27:17.790925312 9
2016-07-02 13:27:17.896483072 9
2016-07-02 13:27:17.598456064 9
Name: x, dtype: int64
</code></pre>
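<p>For reference: the asof-join mentioned above shipped as <code>pd.merge_asof</code>. A minimal sketch of using it for the question's 0.4 s lookup (this assumes the index is sorted and a pandas version that includes <code>merge_asof</code>):</p>
<pre><code>lookup = pd.DataFrame({'time': data.index + pd.Timedelta('0.4s')})
right = data.reset_index().rename(columns={'index': 'time'})
shifted = pd.merge_asof(lookup, right, on='time', direction='backward')
shifted.index = data.index  # re-align to the original timestamps
</code></pre>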
<p>(I used four seconds above to make the example easier to read.)</p> | python|pandas | 3 |
54 | 65,931,302 | I am trying to use CNN for stock price prediction but my code does not seem to work, what do I need to change or add? | <pre><code>import math
import numpy as np
import pandas as pd
import pandas_datareader as pdd
from sklearn.preprocessing import MinMaxScaler
from keras.layers import Dense, Dropout, Activation, LSTM, Convolution1D, MaxPooling1D, Flatten
from keras.models import Sequential
import matplotlib.pyplot as plt
df = pdd.DataReader('AAPL', data_source='yahoo', start='2012-01-01', end='2020-12-31')
data = df.filter(['Close'])
dataset = data.values
len(dataset)
# 2265
training_data_size = math.ceil(len(dataset)*0.7)
training_data_size
# 1586
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
scaled_data
# array([[0.04288701],
# [0.03870297],
# [0.03786614],
# ...,
# [0.96610873],
# [0.98608785],
# [1. ]])
train_data = scaled_data[0:training_data_size,:]
x_train = []
y_train = []
for i in range(60, len(train_data)):
x_train.append(train_data[i-60:i, 0])
y_train.append(train_data[i,0])
if i<=60:
print(x_train)
print(y_train)
'''
[array([0.04288701, 0.03870297, 0.03786614, 0.0319038 , 0.0329498 ,
0.03577404, 0.03504182, 0.03608791, 0.03640171, 0.03493728,
0.03661088, 0.03566949, 0.03650625, 0.03368202, 0.03368202,
0.03598329, 0.04100416, 0.03953973, 0.04110879, 0.04320089,
0.04089962, 0.03985353, 0.04037657, 0.03566949, 0.03640171,
0.03619246, 0.03253139, 0.0294979 , 0.03033474, 0.02960253,
0.03002095, 0.03284518, 0.03357739, 0.03410044, 0.03368202,
0.03472803, 0.02803347, 0.02792885, 0.03556487, 0.03451886,
0.0319038 , 0.03127613, 0.03274063, 0.02688284, 0.02635988,
0.03211297, 0.03096233, 0.03472803, 0.03713392, 0.03451886,
0.03441423, 0.03493728, 0.03587866, 0.0332636 , 0.03117158,
0.02803347, 0.02897494, 0.03546024, 0.03786614, 0.0401674 ])]
[0.03933056376752886]
'''
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_train.shape
# (1526, 60, 1)
model = Sequential()
model.add(Convolution1D(64, 3, input_shape= (100,4), padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Convolution1D(32, 3, padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(1))
model.add(Activation('linear'))
model.summary()
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=50, epochs=50, validation_data = (X_test, y_test), verbose=2)
test_data = scaled_data[training_data_size-60: , :]
x_test = []
y_test = dataset[training_data_size: , :]
for i in range(60, len(test_data)):
x_test.append(test_data[i-60:i, 0])
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
predictions = model.predict(x_test)
predictions = scaler.inverse_transform(predictions)
rsme = np.sqrt(np.mean((predictions - y_test)**2))
rsme
train = data[:training_data_size]
valid = data[training_data_size:]
valid['predictions'] = predictions
plt.figure(figsize=(16,8))
plt.title('PFE')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price in $', fontsize=18)
plt.plot(train['Close'])
plt.plot(valid[['Close', 'predictions']])
plt.legend(['Train', 'Val', 'predictions'], loc='lower right')
plt.show
import numpy as np
y_test, predictions = np.array(y_test), np.array(predictions)
mape = (np.mean(np.abs((predictions - y_test) / y_test))) * 100
accuracy = 100 - mape
print(accuracy)
</code></pre>
<p><strong>The above is my code. I tried to edit it, but it does not seem to be working. I suspect that I did not format my dataset well, but I am new to this field, so I do not know what I should change so that it will fit. I hope you can enlighten me on this. Thank you!</strong></p>
<p><strong>I encountered errors like: "IndexError: index 2264 is out of bounds for axis 0 with size 2264" and
"ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 800 but received input with shape [None, 480]"</strong></p> | <p>Your model's input shape doesn't match your data.</p>
<p>Change this line:</p>
<pre><code>model.add(Convolution1D(64, 3, input_shape= (60,1), padding='same'))
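# (Two further mismatches visible in the question: model.fit references
#  X_train/X_test while the arrays are named x_train/x_test, and the test split
#  is only built after model.fit is called - those need to be fixed as well.)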
</code></pre> | python|tensorflow|keras|conv-neural-network | 0 |
55 | 65,950,088 | randomly choose different sets in numpy? | <p>I am trying to randomly select a set of integers in numpy and am encountering a strange error. If I define a numpy array with two sets of different sizes, <code>np.random.choice</code> chooses between them without issue:</p>
<pre><code>Set1 = np.array([[1, 2, 3], [2, 4]])
In: np.random.choice(Set1)
Out: [4, 5]
</code></pre>
<p>However, once the numpy array are sets of the same size, I get a value error:</p>
<pre><code>Set2 = np.array([[1, 3, 5], [2, 4, 6]])
In: np.random.choice(Set2)
ValueError: a must be 1-dimensional
</code></pre>
<p>Could be user error, but I've checked several times and the only difference is the size of the sets. I realize I can do something like:</p>
<pre><code>Chosen = np.random.choice(N, k)
Selection = Set[Chosen]
</code></pre>
<p>Where <code>N</code> is the number of sets and <code>k</code> is the number of samples, but I'm just wondering if there was a better way and specifically what I am doing wrong to raise a value error when the sets are the same size.</p>
<p>Printout of <code>Set1</code> and <code>Set2</code> for reference:</p>
<pre><code>In: Set1
Out: array([list([1, 3, 5]), list([2, 4])], dtype=object)
In: type(Set1)
Out: numpy.ndarray
In: Set2
Out:
array([[1, 3, 5],
[2, 4, 6]])
In: type(Set2)
Out: numpy.ndarray
</code></pre> | <p>Your issue is caused by a misunderstanding of how numpy arrays work. The first example can not "really" be turned into an array because numpy does not support ragged arrays. You end up with an array of object references that points to two python lists. The second example is a proper 2xN numerical array. I can think of two types of solutions here.</p>
<p>The obvious approach (which would work in both cases, by the way), would be to choose the index instead of the sublist. Since you are sampling with replacement, you can just generate the index and use it directly:</p>
<pre><code>Set[np.random.randint(N, size=k)]
</code></pre>
<p>This is the same as</p>
<pre><code>Set[np.random.choice(N, k)]
</code></pre>
<p>If you want to choose without replacement, your best bet is to use <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="nofollow noreferrer"><code>np.random.choice</code></a>, with <code>replace=False</code>. This is similar to, but less efficient than shuffling. In either case, you can write a one-liner for the index:</p>
<pre><code>Set[np.random.choice(N, k, replace=False)]
</code></pre>
<p>Or:</p>
<pre><code>index = np.arange(Set.shape[0])
np.random.shuffle(index)
Set[index[:k]]
</code></pre>
<p>The nice thing about <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.shuffle.html" rel="nofollow noreferrer"><code>np.random.shuffle</code></a>, though, is that you can apply it to <code>Set</code> directly, whether it is a one- or many-dimensional array. Shuffling will always happen along the first axis, so you can just take the top <code>k</code> elements afterwards:</p>
<pre><code>np.random.shuffle(Set)
Set[:k]
</code></pre>
<p>The shuffling operation works only in-place, so you have to write it out the long way. It's also less efficient for large arrays, since you have to create the entire range up front, no matter how small <code>k</code> is.</p>
<p>The other solution is to turn the second example into an array of list objects like the first one. I do not recommend this solution unless the <em>only</em> reason you are using numpy is for the <code>choice</code> function. In fact I wouldn't recommend it at all, since you can, and probably should, use pythons standard <a href="https://docs.python.org/3/library/random.html" rel="nofollow noreferrer"><code>random</code></a> module at this point. Disclaimers aside, you can coerce the datatype of the second array to be <code>object</code>. It will remove any benefits of using numpy, and can't be done directly. Simply setting <code>dtype=object</code> will still create a 2D array, but will store references to python <code>int</code> objects instead of primitives in it. You have to do something like this:</p>
<pre><code>Set = np.zeros(N, dtype=object)
Set[:] = [[1, 2, 3], [2, 4]]
</code></pre>
<p>You will now get an object essentially equivalent to the one in the first example, and can therefore apply <code>np.random.choice</code> directly.</p>
<p><strong>Note</strong></p>
<p>I show the legacy <a href="https://numpy.org/doc/stable/reference/random/index.html" rel="nofollow noreferrer"><code>np.random</code></a> methods here because of personal inertia if nothing else. The correct way, as suggested in the documentation I link to, is to use the new <a href="https://numpy.org/doc/stable/reference/random/generator.html" rel="nofollow noreferrer">Generator</a> API. This is especially true for the <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.choice.html" rel="nofollow noreferrer"><code>choice</code></a> method, which is much more efficient in the new implementation. The usage is not any more difficult:</p>
<pre><code>Set[np.random.default_rng().choice(N, k, replace=False)]
</code></pre>
<p>There are additional advantages, like the fact that you can now choose directly, even from a multidimensional array:</p>
<pre><code>np.random.default_rng().choice(Set2, k, replace=False)
</code></pre>
<p>The same goes for <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.shuffle.html" rel="nofollow noreferrer"><code>shuffle</code></a>, which, like <code>choice</code>, now allows you to select the axis you want to rearrange:</p>
<pre><code>np.random.default_rng().shuffle(Set)
Set[:k]
</code></pre> | python|numpy|sampling | 2 |
56 | 52,571,930 | Selecting vector of 2D array elements from column index vector | <p>I have a 2D array A:</p>
<pre><code>28 39 52
77 80 66
7 18 24
9 97 68
</code></pre>
<p>And a vector array of column indexes B:</p>
<pre><code>1
0
2
0
</code></pre>
<p>How, in a Pythonic way, using base Python or NumPy, can I select the elements from A which DO NOT correspond to the column indexes in B?</p>
<p>I should get this 2D array, which contains the elements of A not corresponding to the column indexes stored in B:</p>
<pre><code>28 52
80 66
7 18
97 68
</code></pre> | <p>You can make use of broadcasting and a row-wise mask to select elements not contained in your array for each row:</p>
<p><strong><em>Setup</em></strong></p>
<pre><code>B = np.array([1, 0, 2, 0])
cols = np.arange(A.shape[1])
</code></pre>
<hr>
<p>Now use broadcasting to create a mask, and index your array.</p>
<pre><code>mask = B[:, None] != cols
A[mask].reshape(-1, 2)
</code></pre>
<p></p>
<pre><code>array([[28, 52],
[80, 66],
[ 7, 18],
[97, 68]])
</code></pre> | python|arrays|numpy | 2 |
57 | 52,808,604 | subtracting strings in array of data python | <p>I am trying to do the following:</p>
<ol>
<li>create an array of random data</li>
<li>create an array of predefined codes (AW, SS)</li>
<li>subtract all numbers as well as any instance of predefined code. </li>
<li>if a string called "HL" remains after step 3, remove that as well and take the next alphabet pair. If a string called "HL" is the ONLY string in the array then take that.</li>
</ol>
<p>I do not know how to go about completing steps 3 - 4. </p>
<h1>1.</h1>
<pre><code>array_data = ['HL22','PG1234-332HL','1334-SF-21HL','HL43--222PG','HL222AW11144RH','HLSSDD','SSDD']
</code></pre>
<h1>2.</h1>
<pre><code>predefined_code = ['AW','SS']
</code></pre>
<h1>3.</h1>
<p>ideally, results for this step will look like </p>
<pre><code>result_data = [['HL'],['PG,HL'],['SF','HL'],['HL','PG'],['HL','RH'],
['HL','DD'],['DD']
</code></pre>
<h1>4. ideally, results for this step will look like this:</h1>
<pre><code>result_data = [['HL'],['PG'],['SF'],['PG'],['RH'], ['DD'],['DD']
</code></pre>
<p>for step 3, I have tried the following code </p>
<pre><code>not_in_predefined = [item for item in array_data if item not in predefined_code]
</code></pre>
<p>but this doesn't produce the result I'm looking for, because it checks item against item, not a partial string match.</p> | <p>This is fairly simple using Regex.</p>
<p><code>re.findall(r'[A-Z].',item)</code> should give you the text from your strings, and then you can do the required processing on that.</p>
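<p>A small sketch of that idea (illustrative only; it assumes the codes are always pairs of capital letters, so it uses <code>[A-Z]{2}</code> instead of <code>[A-Z].</code> to avoid capturing digits):</p>
<pre><code>import re

array_data = ['HL22','PG1234-332HL','1334-SF-21HL','HL43--222PG','HL222AW11144RH','HLSSDD','SSDD']
predefined_code = {'AW', 'SS'}

result = []
for item in array_data:
    # keep every two-letter code except the predefined ones (steps 2-3)
    pairs = [p for p in re.findall(r'[A-Z]{2}', item) if p not in predefined_code]
    # step 4: drop 'HL' when another pair remains, otherwise keep it
    non_hl = [p for p in pairs if p != 'HL']
    result.append(non_hl if non_hl else pairs)

print(result)  # [['HL'], ['PG'], ['SF'], ['PG'], ['RH'], ['DD'], ['DD']]
</code></pre>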
<p>You may want to convert the list to a set eventually and use the <code>difference</code> operation, instead of looping and removing the elements defined in the <code>predefined_code</code> list.</p> | python|arrays|regex|pandas|loops | 1 |
58 | 46,312,675 | What's the LSTM model's output_node_names? | <p>all. I want generate a freezed model from one LSTM model (<a href="https://github.com/roatienza/Deep-Learning-Experiments/tree/master/Experiments/Tensorflow/RNN" rel="nofollow noreferrer">https://github.com/roatienza/Deep-Learning-Experiments/tree/master/Experiments/Tensorflow/RNN</a>). In my option, I should freeze the last prediction node and use "bazel-bin/tensorflow/python/tools/freeze_graph --input_binary=true --input_graph=model_20170913/model.pb --input_checkpoint=model_20170913/model.ckpt --output_graph=model_20170913/frozen_graph.pb --output_node_names=ArgMax_52"(ArgMax_52 is last default node name). However, I got one notice "Converted 0 variables to const ops." (freeze command's result). Now, I have no idea about which node_name should be as output_node_name?</p> | <p>As mentioned above, "lstm_prediction" is output_node_name. And Tensorboard help me a lot to understand the graph.</p> | tensorflow|freeze|lstm | 0 |
59 | 46,209,772 | Discrepancy of the state of `numpy.random` disappears | <p>There are two python runs of the same project with different settings, but with the same random seeds.</p>
<p>The project contains a function that returns a couple of random numbers using <code>numpy.random.uniform</code>.</p>
<p>Regardless of other uses of <code>numpy.random</code> in the python process, series of the function calls in both of the runs generate the same sequences, until some point.</p>
<p>And after generating different results for one time at that point, they generate the same sequences again, for some period.</p>
<p>I haven't tried using <code>numpy.random.RandomState</code> yet, but how is this possible?</p>
<p>Is it just a coincidence that somewhere something which uses <code>numpy.random</code> caused the discrepancy and fixed it again?</p>
<p>I'm curious if it is the only possibility or there is another explanation.</p>
<p>Thanks in advance.</p>
<p>ADD: I forgot to mention that there was no seeding at that point.</p> | <p>When you use the <code>random</code> module in numpy, each randomly generated number (regardless of the distribution/function) uses the same "global" instance of <code>RandomState</code>. When you set the seed using <code>numpy.random.seed()</code>, you set the seed of the 'global' instance of <code>RandomState</code>. This is the same principle as the <code>random</code> library in Python.</p>
<p>I'm not sure of the specific implementation of the numpy random functions, but I suspect that each random function will make the underlying Mersenne Twister advance a number of 'steps', with the number of steps not necessarily being the same between different <code>random</code> functions.</p>
<p>So, if the order of <em>every</em> call to a <code>random</code> function is not the same between separate runs, then you may see divergence in the generated sequence of random numbers, with convergence again if the Mersenne Twister 'steps' line up again.</p>
<p>You could get around this by initialising a separate <code>RandomState</code> instance for each function you are using. For example:</p>
<pre><code>import numpy as np
seed = 12345
r_uniform = np.random.RandomState(seed)
r_randint = np.random.RandomState(seed)
a_random_uniform_number = r_uniform.uniform()
a_random_int = r_randint.randint(10)
</code></pre>
<p>You might want to set different seeds for each instance - this will depend on what you are using these pseudo-random numbers for.</p> | python|numpy-random | 0 |
60 | 68,965,218 | Remove duplicate strings within a pandas dataframe entry | <p>I need to remove duplicate strings within a pandas dataframe entry, but I can only find solutions for removing duplicate rows.</p>
<p>The entries I want to clean look like this:</p>
<p><a href="https://i.stack.imgur.com/JZhtQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZhtQ.png" alt="enter image description here" /></a></p>
<p>Dataframe looks like this:</p>
<p><a href="https://i.stack.imgur.com/FuVC1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FuVC1.png" alt="enter image description here" /></a></p>
<p>I want each string between the commas to occur only once.
Can someone please help me?</p> | <p>Try this (I've added a simple example of my own df):</p>
<pre><code>import pandas as pd
data = ['a,b,c','a,b,b,e,d','a,a,e,d,f']
df = pd.DataFrame(data,columns={"cleaned_data"})
def remove_dups_letters(row):
sentences = set(row.split(","))
new_str = ','.join(sentences)
return new_str
df['cleaned_data'] = df['cleaned_data'].apply(remove_dups_letters)
print(df)
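# Note: a set does not preserve the original order of the strings; if order
# matters, dict.fromkeys keeps it while still removing duplicates (Python 3.7+):
# df['cleaned_data'] = df['cleaned_data'].apply(lambda s: ','.join(dict.fromkeys(s.split(','))))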
</code></pre> | python|pandas|dataframe | 1 |
61 | 69,254,771 | Parallelize a function with multiple inputs/outputs geodataframe-variables | <p>Using a previous answer (thanks, Booboo),
The code idea is:</p>
<pre><code>from multiprocessing import Pool
def worker_1(x, y, z):
...
t = zip(list_of_Polygon,list_of_Point,column_Point)
return t
def collected_result(t):
x, y, z = t # unpack
save_shp("polys.shp",x)
save_shp("point.shp",y,z)
if __name__ == '__main__':
gg = gpd.read_file("name.shp")
pool = Pool()
for index, pol in gg.iterrows():
xlon ,ylat = gg.centroid
result = pool.starmap(worker_1, zip(pol,xlon,ylat))
# or
# result = mp.Process(worker_1,args = (pol,xlon,ylat))
pool.close()
pool.join()
collected_result(result)
</code></pre>
<p>But the geodataframe (Polygon, Point) is not iterable, so I can't use the pool. Any suggestions on how to parallelize?</p>
<p>How can I compress the (geodataframe) outputs in worker_1 and then save them independently (or as multiple layers in a shapefile)? Is it better to use global parameters? ... because zip only saves lists (right?)</p> | <p>Well, if I understand what you are trying to do, perhaps the following is what you need. Here I am building up the <code>args</code> list that will be used as the <em>iterable</em> argument to <code>starmap</code> by iterating on <code>gg.iterrows()</code> (there is no need to use <code>zip</code>):</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool
def worker_1(pol, xlon, ylat):
...
t = zip(list_of_Polygon, list_of_Point, column_Point)
return t
def collected_result(t):
x, y, z = t # unpack
save_shp("polys.shp", x)
save_shp("point.shp", y, z)
if __name__ == '__main__':
gg = gpd.read_file("name.shp")
pool = Pool()
args = []
for index, pol in gg.iterrows():
xlon, ylat = gg.centroid
args.append((pol, xlon, ylat))
result = pool.starmap(worker_1, args)
pool.close()
pool.join()
collected_result(result)
</code></pre>
<p>You were creating a single <code>Pool</code> instance and in your loop doing repeatedly calls to methods <code>starmap</code>, <code>close</code> and <code>join</code>. But once you call <code>close</code> on the <code>Pool</code> instance you cannot submit any more tasks to the pool (i.e. call <code>starmap</code> again), so I think your looping/indentation was all wrong.</p> | python|parallel-processing|geopandas|pool|shapely | 0 |
62 | 69,229,971 | Arange ordinal number for range of values in column | <p>I have a data frame in which one column's values range from 139 to 150 (rows with values repeat). How do I create a new column that assigns an ordinal value based on the mentioned column? For example, 139 -> 0, 140 -> 1, ..., 150 -> 11</p>
<p>UPD: Mozway's answer is suitable, thanks!</p> | <p>Simply subtract 139: <code> df['col'] -= 139</code></p>
<p>Or, to get a new column: <code>df['new'] = df['col'] - 139</code></p> | python|pandas|dataframe | 1 |
63 | 60,948,086 | Creating a function that operates different string cleaning operations | <p>I built a function that performs multiple cleaning operations, but when I run it on an object column, I get the AttributeError: 'str' object has no attribute 'str' error. Why is that?</p>
<pre><code>news = {'Text':['bNikeb invests in shoes', 'bAdidasb invests in t-shirts', 'dog drank water'], 'Source':['NYT', 'WP', 'Guardian']}
news_df = pd.DataFrame(news)
def string_cleaner(x):
x = x.str.strip()
x = x.str.replace('.', '')
x = x.str.replace(' ', '')
news_df['clean'] = news_df['Text'].apply(string_cleaner)
</code></pre> | <pre><code>news = {'Text':['bNikeb invests in shoes', 'bAdidasb invests in t-shirts', 'dog drank water'], 'Source':['NYT', 'WP', 'Guardian']}
news_df = pd.DataFrame(news)
def string_cleaner(x):
x = x.strip()
x = x.replace('.', '')
x = x.replace(' ', '')
return x
news_df['clean'] = news_df['Text'].apply(string_cleaner)
</code></pre>
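<p>As a side note, the <code>.str</code> accessor from the original attempt does work when applied to the whole Series rather than inside <code>apply</code>; a minimal sketch (the <code>regex=False</code> flag assumes a reasonably recent pandas):</p>
<pre><code>news_df['clean'] = (news_df['Text'].str.strip()
                    .str.replace('.', '', regex=False)
                    .str.replace(' ', '', regex=False))
</code></pre>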
<p><code>apply</code> is used to apply a function to a pandas Series object; the final return type is inferred from the return type of the applied function. You can think of it as passing the values to the function one at a time to transform them; in your case you are sending a series of strings so that each one gets cleaned.</p>
<p>Since x is a string, the operations you're applying (strip, replace) work on it directly; there is no .str attribute on Python strings, which is why you get the error. (There is a built-in str function, used as str(x), to cast another Python type to a string.)</p> | python|string|pandas|function | 1
64 | 60,820,941 | How to break down a numpy array into a list and create a dictionary? | <p>I have the following list and a numpy array.
For the list :</p>
<pre><code>features = np.array(X_train.columns).tolist()
results :
['Attr1', 'Attr2', 'Attr3', 'Attr4', 'Attr5', 'Attr6', 'Attr7', 'Attr8', 'Attr9', 'Attr10', 'Attr11', 'Attr12', 'Attr13', 'Attr14', 'Attr15', 'Attr16', 'Attr17', 'Attr18', 'Attr19', 'Attr20', 'Attr21', 'Attr22', 'Attr23', 'Attr24', 'Attr25', 'Attr26', 'Attr27', 'Attr28', 'Attr29', 'Attr30', 'Attr31', 'Attr32', 'Attr33', 'Attr34', 'Attr35', 'Attr36', 'Attr37', 'Attr38', 'Attr39', 'Attr40', 'Attr41', 'Attr42', 'Attr43', 'Attr44', 'Attr45', 'Attr46', 'Attr47', 'Attr48', 'Attr49', 'Attr50', 'Attr51', 'Attr52', 'Attr53', 'Attr54', 'Attr55', 'Attr56', 'Attr57', 'Attr58', 'Attr59', 'Attr60', 'Attr61', 'Attr62', 'Attr63', 'Attr64']
</code></pre>
<p>and array name ab</p>
<pre><code>aa=(lr.coef_) #I put a regression result on numpy array so I can split them, I want to put them as a list
ab=np.split(aa,len(aa))
results :
[array([[ 0.04181571, 0.62369216, -0.23559375, 0.78663624, -0.13935947,
-0.1118698 , -0.05672835, -1.73851643, -0.42134655, 0.79001534,
0.05048936, -0.09287526, 0.10103251, -0.0587092 , -0.05300849,
0.72827807, 1.15870475, -0.13861187, -0.42572654, 0.19369654,
-0.33319238, -0.06805035, 0.14067888, -0.07418516, -0.04400882,
-0.78701564, -0.10921816, -0.26166642, 0.06800944, 0.07672145,
0.22109349, -0.15389544, 2.41697614, 0.21749429, -0.0766771 ,
0.77580103, 0.04128744, -0.92835969, -0.41802274, 0.89865658,
-0.12102089, -0.28887104, 0.10421332, 0.14445757, 0.02719274,
-1.73622976, -0.34980593, 0.35199196, 0.56110135, 0.4460968 ,
-1.13265322, 0.26188587, 0.14336352, 0.2341355 , -0.10077637,
0.43080231, -0.05521557, -0.1996818 , 0.00513076, -0.14477274,
0.04712721, 0.15380395, -2.51974007, -0.03988658]])]
</code></pre>
<p>Now, I want to make a dictionary from them, but I'm confused about how to turn the array into a list.</p>
<p>This is what I've done :</p>
<pre><code>for x in features :
for y in ab:
print({x:y})
and the result is not as desired, since it's failed to break down the array :
{'Attr1': array([[ 0.04181571, 0.62369216, -0.23559375, 0.78663624, -0.13935947,
-0.1118698 , -0.05672835, -1.73851643, -0.42134655, 0.79001534,
0.05048936, -0.09287526, 0.10103251, -0.0587092 , -0.05300849,
0.72827807, 1.15870475, -0.13861187, -0.42572654, 0.19369654,
-0.33319238, -0.06805035, 0.14067888, -0.07418516, -0.04400882,
-0.78701564, -0.10921816, -0.26166642, 0.06800944, 0.07672145,
0.22109349, -0.15389544, 2.41697614, 0.21749429, -0.0766771 ,
0.77580103, 0.04128744, -0.92835969, -0.41802274, 0.89865658,
-0.12102089, -0.28887104, 0.10421332, 0.14445757, 0.02719274,
-1.73622976, -0.34980593, 0.35199196, 0.56110135, 0.4460968 ,
-1.13265322, 0.26188587, 0.14336352, 0.2341355 , -0.10077637,
0.43080231, -0.05521557, -0.1996818 , 0.00513076, -0.14477274,
0.04712721, 0.15380395, -2.51974007, -0.03988658]])}
{'Attr2': array([[ 0.04181571, 0.62369216, -0.23559375, 0.78663624, -0.13935947,
-0.1118698 , -0.05672835, -1.73851643, -0.42134655, 0.79001534,
0.05048936, -0.09287526, 0.10103251, -0.0587092 , -0.05300849,
0.72827807, 1.15870475, -0.13861187, -0.42572654, 0.19369654,
-0.33319238, -0.06805035, 0.14067888, -0.07418516, -0.04400882,
-0.78701564, -0.10921816, -0.26166642, 0.06800944, 0.07672145,
0.22109349, -0.15389544, 2.41697614, 0.21749429, -0.0766771 ,
0.77580103, 0.04128744, -0.92835969, -0.41802274, 0.89865658,
-0.12102089, -0.28887104, 0.10421332, 0.14445757, 0.02719274,
-1.73622976, -0.34980593, 0.35199196, 0.56110135, 0.4460968 ,
-1.13265322, 0.26188587, 0.14336352, 0.2341355 , -0.10077637,
0.43080231, -0.05521557, -0.1996818 , 0.00513076, -0.14477274,
0.04712721, 0.15380395, -2.51974007, -0.03988658]])}
{'Attr3': array([[ 0.04181571, 0.62369216, -0.23559375, 0.78663624, -0.13935947,
-0.1118698 , -0.05672835, -1.73851643, -0.42134655, 0.79001534,
0.05048936, -0.09287526, 0.10103251, -0.0587092 , -0.05300849,
0.72827807, 1.15870475, -0.13861187, -0.42572654, 0.19369654,
-0.33319238, -0.06805035, 0.14067888, -0.07418516, -0.04400882,
-0.78701564, -0.10921816, -0.26166642, 0.06800944, 0.07672145,
0.22109349, -0.15389544, 2.41697614, 0.21749429, -0.0766771 ,
0.77580103, 0.04128744, -0.92835969, -0.41802274, 0.89865658,
-0.12102089, -0.28887104, 0.10421332, 0.14445757, 0.02719274,
-1.73622976, -0.34980593, 0.35199196, 0.56110135, 0.4460968 ,
-1.13265322, 0.26188587, 0.14336352, 0.2341355 , -0.10077637,
0.43080231, -0.05521557, -0.1996818 , 0.00513076, -0.14477274,
0.04712721, 0.15380395, -2.51974007, -0.03988658]])}.......
</code></pre>
<p>Could you help me build a list from the <code>ab</code> array?
And how should I turn them into a dictionary?</p>
<pre><code>The expected results : {Attr1 : 0.04181571, Attr2 : 0.623692160, and so on...}
</code></pre>
<p>Thank you very much!</p> | <p>you could use the built-in function <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer">zip</a> :</p>
<pre><code>dict(zip(features, ab[0].ravel()))
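# equivalently, skipping the split step (aa = lr.coef_ already has shape (1, n)):
# dict(zip(features, aa.ravel()))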
</code></pre>
<p>you can check the docs for <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html#numpy.ravel" rel="nofollow noreferrer">numpy.ravel</a></p>
<blockquote>
<p>Return a contiguous flattened array.</p>
<p>A 1-D array, containing the elements of the input, is returned.</p>
</blockquote>
<hr>
<p>Since your <code>ab</code> variable is obtained with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html#numpy-split" rel="nofollow noreferrer">numpy.split</a>, <code>ab</code> is a list containing one numpy array, as you showed.</p> | python|numpy | 1
65 | 71,476,405 | Mapping values from one Dataframe to another and updating existing column | <p>I have a dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Name</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>John</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>Mary</td>
<td>10</td>
</tr>
<tr>
<td>3</td>
<td>Tom</td>
<td>9</td>
</tr>
<tr>
<td>4</td>
<td></td>
<td>8</td>
</tr>
<tr>
<td>5</td>
<td></td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>And another dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>Jerry</td>
</tr>
<tr>
<td>5</td>
<td>Pat</td>
</tr>
</tbody>
</table>
</div>
<p>And I want a resulting dataframe like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Name</th>
<th>Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>John</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>Mary</td>
<td>10</td>
</tr>
<tr>
<td>3</td>
<td>Tom</td>
<td>9</td>
</tr>
<tr>
<td>4</td>
<td>Jerry</td>
<td>8</td>
</tr>
<tr>
<td>5</td>
<td>Pat</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>Is there a way to do it in Python?</p> | <p>Does this suffice:</p>
<pre class="lang-py prettyprint-override"><code>df1.set_index('Id').fillna({'Name' : df2.set_index('Id').Name}).reset_index()
Id Name Score
0 1 John 10
1 2 Mary 10
2 3 Tom 9
3 4 Jerry 8
4 5 Pat 7
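# An alternative using map, keeping df1's row order (this assumes the blank
# names are NaN, as above):
# df1['Name'] = df1['Name'].fillna(df1['Id'].map(df2.set_index('Id')['Name']))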
</code></pre> | python|pandas | 0 |
66 | 43,086,557 | Convolve2d just by using Numpy | <p>I am studying image-processing using NumPy and facing a problem with filtering with convolution.</p>
<p><strong>I would like to convolve a gray-scale image. (convolve a 2d Array with a smaller 2d Array)</strong></p>
<p>Does anyone have an idea to <strong>refine</strong> my method?</p>
<p>I know that <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html" rel="nofollow noreferrer">SciPy</a> supports convolve2d but I want to make a convolve2d only by using NumPy.</p>
<h1>What I have done</h1>
<p>First, I made a 2d array the submatrices.</p>
<pre><code>a = np.arange(25).reshape(5,5) # original matrix
submatrices = np.array([
[a[:-2,:-2], a[:-2,1:-1], a[:-2,2:]],
[a[1:-1,:-2], a[1:-1,1:-1], a[1:-1,2:]],
[a[2:,:-2], a[2:,1:-1], a[2:,2:]]])
</code></pre>
<p>the submatrices seems complicated but what I am doing is shown in the following drawing.</p>
<p><a href="https://i.stack.imgur.com/VLRnQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLRnQ.png" alt="submatrices" /></a></p>
<p>Next, I multiplied each submatrices with a filter.</p>
<pre><code>conv_filter = np.array([[0,-1,0],[-1,4,-1],[0,-1,0]])
multiplied_subs = np.einsum('ij,ijkl->ijkl',conv_filter,submatrices)
</code></pre>
<p><a href="https://i.stack.imgur.com/lh8Ym.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lh8Ym.png" alt="multiplied_subs" /></a></p>
<p>and summed them.</p>
<pre><code>np.sum(np.sum(multiplied_subs, axis = -3), axis = -3)
#array([[ 6, 7, 8],
# [11, 12, 13],
# [16, 17, 18]])
</code></pre>
<p>Thus this procedure can be called my convolve2d.</p>
<pre><code>def my_convolve2d(a, conv_filter):
submatrices = np.array([
[a[:-2,:-2], a[:-2,1:-1], a[:-2,2:]],
[a[1:-1,:-2], a[1:-1,1:-1], a[1:-1,2:]],
[a[2:,:-2], a[2:,1:-1], a[2:,2:]]])
multiplied_subs = np.einsum('ij,ijkl->ijkl',conv_filter,submatrices)
return np.sum(np.sum(multiplied_subs, axis = -3), axis = -3)
</code></pre>
<p>However, I find this my_convolve2d troublesome for 3 reasons.</p>
<ol>
<li>Generation of the submatrices is so awkward that it is difficult to read, and it can only be used when the filter is 3*3</li>
<li>The size of the variant submatrices seems to be too big, since it is approximately 9 times bigger than the original matrix.</li>
<li>The summing seems a little non intuitive. Simply said, ugly.</li>
</ol>
<p>Thank you for reading this far.</p>
<p>Kind of update. I wrote a conv3d for myself. I will leave this as a public domain.</p>
<pre><code>def convolve3d(img, kernel):
# calc the size of the array of submatrices
sub_shape = tuple(np.subtract(img.shape, kernel.shape) + 1)
# alias for the function
strd = np.lib.stride_tricks.as_strided
# make an array of submatrices
submatrices = strd(img,kernel.shape + sub_shape,img.strides * 2)
# sum the submatrices and kernel
convolved_matrix = np.einsum('hij,hijklm->klm', kernel, submatrices)
return convolved_matrix
</code></pre> | <p>You could generate the subarrays using <a href="https://stackoverflow.com/questions/19414673/in-numpy-how-to-efficiently-list-all-fixed-size-submatrices"><code>as_strided</code></a>:</p>
<pre><code>import numpy as np
a = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
sub_shape = (3,3)
view_shape = tuple(np.subtract(a.shape, sub_shape) + 1) + sub_shape
strides = a.strides + a.strides
sub_matrices = np.lib.stride_tricks.as_strided(a,view_shape,strides)
</code></pre>
<p>To get rid of your second "ugly" sum, alter your <code>einsum</code> so that the output array only has <code>k</code> and <code>l</code>; the summation over the remaining axes is then done implicitly.</p>
<pre><code>conv_filter = np.array([[0,-1,0],[-1,5,-1],[0,-1,0]])
m = np.einsum('ij,ijkl->kl',conv_filter,sub_matrices)
# [[ 6 7 8]
# [11 12 13]
# [16 17 18]]
</code></pre> | python|numpy|image-processing|matrix|convolution | 34 |
67 | 72,426,602 | Concat two values into string Pandas? | <p>I tried to concat two values of two columns in Pandas like this:</p>
<pre><code>new_dfr["MMYY"] = new_dfr["MM"]+new_dfr["YY"]
</code></pre>
<p>I got warning message:</p>
<pre><code>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
new_dfr["MMYY"] = new_dfr["MM"]+new_dfr["YY"]
</code></pre>
<p>How to fix it?</p> | <pre><code>new_dfr["MMYY"] = new_dfr["MM"].astype(str) + new_dfr["YY"].astype(str)
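# Side note (not part of the original answer): the SettingWithCopyWarning usually means
# new_dfr was itself created as a slice of another DataFrame; building it with .copy(),
# e.g. new_dfr = source_df[mask].copy() (hypothetical names), avoids the warning.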
</code></pre> | pandas | 0 |
68 | 72,413,201 | Is there a faster way to do this loop? | <p>I want to create a new column using the following loop. The table just has the columns 'open', and 'start'. I want to create a new column 'startopen', where if 'start' equals 1, then 'startopen' is equal to 'open'. Otherwise, 'startopen' is equal to whatever 'startopen' was in the row above of this newly created column. Currently I'm able to achieve this using the following:</p>
<pre><code>for i in range(df.shape[0]):
if df['start'].iloc[i] == 1:
df.loc[df.index[i],'startopen'] = df.loc[df.index[i],'open']
else:
df.loc[df.index[i],'startopen'] = df.loc[df.index[i-1],'startopen']
</code></pre>
<p>This works, but is very slow for large datasets. Are there any built in functions that can do this faster?</p> | <blockquote>
<p>I want to create a new column 'startopen', where if 'start' equals 1, then 'startopen' is equal to 'open'</p>
<p>Otherwise, 'startopen' is equal to whatever 'startopen' was in the row above of this newly created column.</p>
</blockquote>
<p>IIUC, the "otherwise" part is equivalent to forward-filling: rows where <code>start</code> is not 1 take the last <code>startopen</code> value from a row where <code>start</code> equals 1</p>
<pre class="lang-py prettyprint-override"><code>df['startopen'] = pd.Series(np.where(df['start'].eq(1), df['open'], np.nan), index=df.index).ffill()
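# An equivalent, arguably simpler form (a sketch with the same semantics):
# df['startopen'] = df['open'].where(df['start'].eq(1)).ffill()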
</code></pre> | python|pandas | 2 |
69 | 72,360,949 | Aggregating multiple columns Pandas | <p>Currently my csv looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>title</th>
<th>field1</th>
<th>field2</th>
<th>field3</th>
<th>field4</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>A1</td>
<td>A11</td>
<td>553</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>A1</td>
<td>A12</td>
<td>94</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>A1</td>
<td>A13</td>
<td>30</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>A1</td>
<td>{n/a}</td>
<td>0</td>
<td>9586</td>
</tr>
<tr>
<td>A</td>
<td>A2</td>
<td>A21</td>
<td>200</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>A2</td>
<td>{n/a}</td>
<td>0</td>
<td>3950</td>
</tr>
<tr>
<td>A</td>
<td>A3</td>
<td>A31</td>
<td>35</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>A3</td>
<td>{n/a}</td>
<td>0</td>
<td>2929</td>
</tr>
</tbody>
</table>
</div>
<p>But I am wanting it to look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>title</th>
<th>field1</th>
<th>field2</th>
<th>field3</th>
<th>field4</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>A1</td>
<td>A11</td>
<td>553</td>
<td>9586</td>
</tr>
<tr>
<td>A</td>
<td>A1</td>
<td>A12</td>
<td>94</td>
<td>9586</td>
</tr>
<tr>
<td>A</td>
<td>A1</td>
<td>A13</td>
<td>30</td>
<td>9586</td>
</tr>
<tr>
<td>A</td>
<td>A2</td>
<td>A21</td>
<td>200</td>
<td>3950</td>
</tr>
<tr>
<td>A</td>
<td>A3</td>
<td>A31</td>
<td>35</td>
<td>2929</td>
</tr>
</tbody>
</table>
</div>
<p>This is my code:</p>
<pre><code>def fun(df, cols_to_aggregate, cols_order):
df = df.groupby(['field1', 'field2'], as_index=False)\
.agg(cols_to_aggregate)
df['title'] = 'A'
df = df[cols_order]
return df
def create_csv(df, month_date):
cols_to_aggregate = {'field3': 'sum', 'field4': 'sum'}
cols_order = ['title', 'field1', 'field2', 'field3']
funCSV = fun(df, cols_to_aggregate, cols_order)
return funCSV
</code></pre>
<p>Any help would be appreciated as I can't figure out how to match field4 to all of the relevant field2's.</p> | <p>Use:</p>
<pre><code>def fun(df, cols_to_aggregate, cols_order):
df = df.groupby(['field1', 'field2'], as_index=False)\
.agg(cols_to_aggregate)
df['title'] = 'A'
#aggregate field4 to new column
df['field4'] = df.groupby('field1')['field4'].transform('sum')
df = df[cols_order]
return df
def create_csv(df, month_date):
cols_to_aggregate = {'field3': 'sum', 'field4': 'sum'}
#aded value 'field4'
cols_order = ['title', 'field1', 'field2', 'field3','field4']
funCSV = fun(df, cols_to_aggregate, cols_order)
return funCSV
print (create_csv(df, '2015-01').loc[lambda x: x['field2'].ne('{n/a}')])
title field1 field2 field3 field4
0 A A1 A11 553 9586
1 A A1 A12 94 9586
2 A A1 A13 30 9586
4 A A2 A21 200 3950
6 A A3 A31 35 2929
</code></pre>
<p>Or if need first non <code>0</code> value per <code>field1</code> use:</p>
<pre><code>def fun(df, cols_to_aggregate, cols_order):
df = df.groupby(['field1', 'field2'], as_index=False)\
.agg(cols_to_aggregate)
df['title'] = 'A'
df['field4'] = df.groupby('field1')['field4'].transform('first')
df = df[cols_order]
return df
def create_csv(df, month_date):
cols_to_aggregate = {'field3': 'sum', 'field4': 'first'}
cols_order = ['title', 'field1', 'field2', 'field3','field4']
funCSV = fun(df, cols_to_aggregate, cols_order)
return funCSV
print (create_csv(df.replace({'field4':{0:np.nan}}), '2015-01').loc[lambda x: x['field2'].ne('{n/a}')])
title field1 field2 field3 field4
0 A A1 A11 553 9586.0
1 A A1 A12 94 9586.0
2 A A1 A13 30 9586.0
4 A A2 A21 200 3950.0
6 A A3 A31 35 2929.0
</code></pre> | python|pandas|csv | 2 |
70 | 72,186,132 | TypeError: 'module' object is not callable when using Keras | <p>I've been having lots of imports issues when it comes to TensorFlow and Keras and now I stumbled upon this error:</p>
<pre><code>TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17880/703187089.py in <module>
75 #model.compile(loss="categorical_crossentropy",optimizers.rmsprop(lr=0.0001),metrics=["accuracy"])
76
---> 77 model.compile(optimizers.rmsprop_v2(lr=0.0001, decay=1e-6),loss="categorical_crossentropy",metrics=["accuracy"])
78
79 STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size
TypeError: 'module' object is not callable
</code></pre>
<p>These are the imports:</p>
<pre><code>from tensorflow import keras
from keras_preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D
from keras import regularizers, optimizers
from keras.models import Sequential
from keras import optimizers
from keras.optimizers import rmsprop_v2, adadelta_v2
</code></pre> | <p><code>keras.optimizers.rmsprop_v2</code> and <code>keras.optimizers.adadelta_v2</code> are the modules. You want:</p>
<pre><code>from keras.optimizers import RMSprop, Adadelta
</code></pre>
<p>And:</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/RMSprop" rel="nofollow noreferrer"><code>optimizers.RMSprop(lr=0.0001, decay=1e-6)</code></a> (or just <code>RMSprop(lr=0.0001, decay=1e-6)</code>) instead of <code>optimizers.rmsprop_v2(lr=0.0001, decay=1e-6)</code></p> | python|tensorflow|keras | 1 |
 71 | 72,396,287 | Sort pandas Series both on values and index | <p>I want to sort a Series in descending order by value, but I also need to respect the alphabetical order of the index.
Suppose the Series is like this:</p>
<pre><code>(index)
a 2
b 5
d 3
z 1
t 1
g 2
n 3
l 6
f 6
f 7
</code></pre>
<p>I need to convert it to the following Series without converting to DataFrame and then convert it to Series,</p>
<p>out:</p>
<pre><code>(index)
f 7
f 6
l 6
b 5
d 3
n 3
a 2
g 2
t 1
z 1
</code></pre>
<p>I used <code>lexsort</code> but it wasn't suitable: it sorts both the value and the index in ascending order.</p> | <p>You can first sort the index, then sort the values with a stable algorithm:</p>
<pre><code>s.sort_index().sort_values(ascending=False, kind='stable')
</code></pre>
<p>output:</p>
<pre><code>f 7
l 6
b 5
d 3
n 3
a 2
g 2
t 1
z 1
dtype: int64
</code></pre>
<p>used input:</p>
<pre><code>s = pd.Series({'a': 2, 'b': 5, 'd': 3, 'z': 1, 't': 1, 'g': 2, 'n': 3, 'l': 6, 'f': 7})
</code></pre> | python|pandas|sorting|series|numpy-ndarray | 1 |
72 | 50,540,768 | Tensorflow linear regression house prices | <p>I am trying to solve a linear regression problem using neural networks but my loss is coming to the power of 10 and is not reducing for training. I am using the house price prediction dataset(<a href="https://www.kaggle.com/c/house-prices-advanced-regression-techniques" rel="nofollow noreferrer">https://www.kaggle.com/c/house-prices-advanced-regression-techniques</a>) and can't figure whats going wrong. Please help someone</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(df2, y, test_size=0.2)
X_tr=np.array(X_train)
y_tr=np.array(y_train)
X_te=np.array(X_test)
y_te=np.array(y_test)
def get_weights(shape,name): #(no of neurons*no of columns)
s=tf.truncated_normal(shape)
w=tf.Variable(s,name=name)
return w
def get_bias(number,name):
s=tf.truncated_normal([number])
b=tf.Variable(s,name=name)
return b
x=tf.placeholder(tf.float32,name="input")
w=get_weights([34,100],'layer1')
b=get_bias(100,'bias1')
op=tf.matmul(x,w)+b
a=tf.nn.relu(op)
fl=get_weights([100,1],'output')
b2=get_bias(1,'bias2')
op2=tf.matmul(a,fl)+b2
y=tf.placeholder(tf.float32,name='target')
loss=tf.losses.mean_squared_error(y,op2)
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
with tf.Session() as sess:
for i in range(0,1000):
sess.run(tf.global_variables_initializer())
_,l=sess.run([optimizer,loss],feed_dict={x:X_tr,y:y_tr})
print(l)
</code></pre> | <p>You are simply re-initializing the variables randomly in every training step. Just call <code>sess.run(tf.global_variables_initializer())</code> only once, before the loop. </p> | tensorflow|linear-regression | 1 |
73 | 50,242,364 | Multiple Aggregate Functions based on Multiple Columns in Pandas | <p>I am working with a Pandas df in Python. I have the following input df:</p>
<pre><code>Color Shape Value
Blue Square 5
Red Square 2
Green Square 7
Blue Circle 9
Blue Square 2
Green Circle 6
Red Circle 2
Blue Square 5
Blue Circle 1
</code></pre>
<p>I would like the following output:</p>
<pre><code>Color Shape Count Sum
Blue Square 3 12
Red Square 1 2
Green Square 1 7
Blue Circle 2 10
Green Circle 1 6
Red Circle 1 2
</code></pre>
<p>Looking for something like pivot_table() but do not want the hierarchical index. </p> | <p>OK, so I did more research and will answer this one myself, because it may be helpful for others.</p>
<p>The problem I am having is associated with indexing more than pivot tables. To remove the multiple index a simple: </p>
<pre><code>df.reset_index()
</code></pre>
<p>does the trick just fine.</p>
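<p>For completeness, a minimal sketch of the whole aggregation without <code>pivot_table()</code> (assuming pandas 0.25 or later, which supports named aggregation):</p>
<pre><code>out = (df.groupby(['Color', 'Shape'])['Value']
         .agg(Count='count', Sum='sum')
         .reset_index())
#   Color   Shape  Count  Sum
# 0  Blue  Circle      2   10
# 1  Blue  Square      3   12
# ... (one row per Color/Shape pair)
</code></pre>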
<p>As a side note, I don't understand why a question like this would be down-voted. It is something not obvious in the documentation, or any of the literature I have read. It simply involves gaining a deeper insight into how these modules work, which is why people come here.</p>
<p>To down-vote something like this is, frankly, smug. In my opinion it defeats the purpose of this site.</p> | python|pandas|pivot-table | 1 |
74 | 50,383,480 | Adding a pickle-able attribute to a subclass of numpy.ndarray | <p>I would like to add a property (<code>.csys</code>) to a subclass of numpy.ndarray:</p>
<pre><code>import numpy as np
class Point(np.ndarray):
def __new__(cls, arr, csys=None):
obj = np.asarray(arr, dtype=np.float64).view(cls)
obj._csys = csys
return obj
def __array_finalize__(self, obj):
if obj is None: return
self._csys = getattr(obj, '_csys', None)
@property
def csys(self):
print('Getting .csys')
return self._csys
@csys.setter
def csys(self, csys):
print('Setting .csys')
self._csys = csys
</code></pre>
<p>However, when I run this test code:</p>
<pre><code>pt = Point([1, 2, 3])
pt.csys = 'cmm'
print("pt.csys:", pt.csys)
# Pickle, un-pickle, and check again
import pickle
pklstr = pickle.dumps(pt)
ppt = pickle.loads(pklstr)
print("ppt.csys:", ppt.csys)
</code></pre>
<p>it appears that the attribute cannot be pickled:</p>
<pre><code>Setting .csys
Getting .csys
pt.csys: cmm
Getting .csys
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
C:\Rut\Vanes\bin\pointtest.py in <module>()
39 ppt = pickle.loads(pklstr)
40
---> 41 print("ppt.csys:", ppt.csys)
C:\Rut\Vanes\bin\point.py in csys(self)
15 def csys(self):
16 print('Getting .csys')
---> 17 return self._csys
18
19 @csys.setter
AttributeError: 'Point' object has no attribute '_csys'
</code></pre>
<p>I tried doing the same thing without using decorators (e.g. defining <code>get_csys()</code> and <code>set_csys()</code>, plus <code>csys = property(__get_csys, __set_csys)</code>, but had the same result with that.</p>
<p>I'm using numpy 1.13.3 under Python 3.6.3</p> | <p>This question has already been asked and answered <a href="https://stackoverflow.com/questions/26598109/preserve-custom-attributes-when-pickling-subclass-of-numpy-array">here</a>. In a nutshell, numpy uses <code>__reduce__</code> and <code>__setstage__</code> to pickle itself. The overrides, adapted to the case above, look like this:</p>
<pre><code>def __reduce__(self):
# Get the parent's __reduce__ tuple
pickled_state = super().__reduce__()
# Create our own tuple to pass to __setstate__
new_state = pickled_state[2] + (self._csys,)
# Return a tuple that replaces the parent's __setstate__ tuple with our own
return (pickled_state[0], pickled_state[1], new_state)
def __setstate__(self, state):
self._csys = state[-1] # Set the _csys attribute
# Call the parent's __setstate__ with the other tuple elements.
super().__setstate__(state[0:-1])
</code></pre>
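<p>With these two methods added to the <code>Point</code> class from the question, the original pickle round-trip should now preserve the attribute (a quick sketch of the check):</p>
<pre><code>import pickle

pt = Point([1, 2, 3], csys='cmm')
ppt = pickle.loads(pickle.dumps(pt))
assert ppt.csys == 'cmm'
</code></pre>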
<p>Also note that the getter and setter methods (under the <code>@property</code> and <code>@csys.getter</code> decorators, respectively) are not strictly required in this simple case. If they are dispensed with, access <code>.csys</code> directly, rather than through the 'private' <code>._csys</code> attribute. </p> | python-3.x|numpy|pickle | 0 |
75 | 45,599,988 | Looking for help on installing a numpy extension | <p>I found a numpy extension on github that would be really helpful for a program I'm currently writting, however I don't know how to install it.</p>
<p>Here's the link to the extension: <a href="https://pypi.python.org/pypi?name=py_find_1st&:action=display" rel="nofollow noreferrer">https://pypi.python.org/pypi?name=py_find_1st&:action=display</a></p>
<p>I'm using windows 10 which might be the reason why the installer provided doesn't work, I found a file looking like a numpy extension as described here: <a href="https://docs.scipy.org/doc/numpy-1.10.0/user/c-info.how-to-extend.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.10.0/user/c-info.how-to-extend.html</a></p>
<p>But there's no mention on this page of where to put the code of the numpy extension, and I didn't manage to find any explanations online.</p>
<p>Would anyone have an idea on how to install this?</p> | <p>To build any extension modules for Python, you’ll need a <code>C compiler</code>. Various <code>NumPy</code> modules use <code>FORTRAN 77</code> libraries, so you’ll also need a <code>FORTRAN 77</code> compiler installed.</p>
<p>However, if you just want to install the tar.gz file that they have on the website, follow these steps:</p>
<ol>
<li>Open cmd (Command Prompt)</li>
<li>Write <code>set path=%path%;C:\Python27\</code></li>
<li>Extract the tar.gz file (use a program like PeaZip)</li>
<li>Change directories within the command line (if you are confused on how to do this look <a href="http://coweb.cc.gatech.edu/ice-gt/339" rel="nofollow noreferrer">here</a> for reference)</li>
<li>Get to your files' directory (something like <code>cd c:\Users\pdxNat\Downloads\py_find_1st1.0.6</code>)</li>
<li>Run <code>python setup.py install</code></li>
</ol> | python|python-3.x|numpy | 1 |
 76 | 62,709,674 | How to split a dataframe having a list of column values and counts? | <p>I have a CSV based dataframe</p>
<pre><code>name value
A 5
B 5
C 5
D 1
E 2
F 1
</code></pre>
<p>and a values count dictionary like this:</p>
<pre><code>{
5: 2,
1: 1
}
</code></pre>
<p>How to split original dataframe into two:</p>
<pre><code>name value
A 5
B 5
D 1
name value
C 5
E 2
F 1
</code></pre>
<p>So how to split a dataframe having a list of column values and counts in pandas?</p> | <p>This worked for me:</p>
<pre><code>def target_indices(df, value_count):
indices = []
for index, row in df.iterrows():
for key in value_count:
if key == row['value'] and value_count[key] > 0:
indices.append(index)
value_count[key] -= 1
return(indices)
df = pd.DataFrame({'name': ['A', 'B', 'C', 'D', 'E', 'F'], 'value': [5, 5, 5, 1, 2, 1]})
value_count = {5: 2, 1: 1}
indices = target_indices(df, value_count)
df1 = df.iloc[indices]
print(df1)
df2 = df.drop(indices)
print(df2)
</code></pre>
<p>Output:</p>
<pre><code> name value
0 A 5
1 B 5
3 D 1
name value
2 C 5
4 E 2
5 F 1
</code></pre> | python|pandas | 1 |
77 | 62,704,351 | sentiment analysis using python pandas and scikit learn | <p>I have a dataset of product review.I want to count words in a way that instead of counting all the words I want to count some specific words like ('Amazing','Great','Love' etc) and put this counting in a column called 'word_count'.Now our goal is to create a column products[‘awesome’] where each row contains the number of times the word ‘awesome’ showed up in the review for the corresponding product.we will use the .apply() method to iterate the the logic above for each row of the ‘word_count’ column.</p>
<p>First,we have to use a Python function to define the logic above. we have to write a function called awesome_count which takes in the word counts and returns the number of times ‘awesome’ appears in the reviews.</p>
<p>Next, we have to use .apply() to iterate awesome_count for each row of ‘word_count’ and create a new column called ‘awesome’ with the resulting counts. Here is what that looks like:
<strong>products['awesome'] = products['word_count'].apply(awesome_count)</strong></p>
<p>Can anyone please help me with the code need for the problem mentioned above.
Thanks in advance.</p> | <p>Alright I lied; for standalone getting word frequency over a corpus we can combine pandas and numpy like so:</p>
<pre><code>word_A = np.array(df.series.str.findall('word'))
getlength = np.vectorize(len)
getlength(word_A)
</code></pre>
<p><em><strong>Whats going on under the hood:</strong></em></p>
<p><strong>LINE1</strong></p>
<ul>
<li><code>pd.Series.str</code> gives access to the Series' string methods;</li>
<li><code>str.findall()</code> returns all occurrences of the <em>"pattern"</em> matched to a list element (findall is a re function); since we're inputting a string, it's going to return the string over and over each time it's matched. The result will be X number of list elements, where X is the amount of documents you searched, and Y strings of your word in each element, where Y is a copy of the string for each time it matched in that document;</li>
</ul>
<p>For example, if you have 3 documents, and each document has the following matches of 'word': 1, 2, 4, you're list will look like:</p>
<pre><code>[['word'], ['word', 'word'], ['word', 'word', 'word', 'word']]
</code></pre>
<ul>
<li><p><code>np.array()</code> converts the list to an array (<em>so we can vectorize it</em>);</p>
<p><code>array([list(['word']), list(['word', 'word']), list(['word', 'word', 'word', 'word'])], dtype=object)</code></p>
</li>
</ul>
<p><strong>LINE2</strong></p>
<ul>
<li><code>np.vectorize()</code> makes the function we pass it a <em><strong>vectorized function</strong></em> (primes it for numpy broadcasting);</li>
</ul>
<p><strong>LINE3</strong></p>
<ul>
<li>apply our <em><strong>vectorized</strong></em> function to the array;</li>
</ul>
<p>When the vectorized function (In this case, len()) is called in line 3, it goes through each array element and applies that function. The result is an array that index matches the initial document series, but contains an integer count of the search term. For example:</p>
<pre><code>array([ 1, 2, 4])
</code></pre>
<p><em><strong>Note:</strong></em> I'd still recommend going the <code>slightly</code> longer route - and at least preprocessing your data before you do frequency statistics.</p>
<p>Hope this helps!</p>
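<p>Tying this back to the question's pattern, a minimal sketch of the <code>.apply()</code> route (assuming each entry of <code>products['word_count']</code> is already a dict of word frequencies, which is what the question's code implies):</p>
<pre><code>def awesome_count(word_count):
    # number of times 'awesome' appears in this review's word-count dict
    return word_count.get('awesome', 0)

products['awesome'] = products['word_count'].apply(awesome_count)
</code></pre>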
<h2><strong>update</strong></h2>
<p>How many words are you wanting, and how big is your dataset?</p>
<p>If your computer can handle it, you can use Sklearn’s CountVectorizer to transform your corpus into a dense matrix of ‘document rows’ (documents x words) where each value is the frequency of the word in that document.</p>
<p>From there, you can query the documents relative to their category using document indexes and get aggregate counts for all of the words.</p>
<p>This approach is more computationally rigorous and will take longer computing, but if you are going to be drawing a lot of EDA from the frequency it’s a good idea to just get the data in a matrix/frame if you can store it.</p>
<p>If you’re only doing a couple of words and aren’t expecting to do much analysis, then we can use NumPy arrays to return the index position of a document each time the word is found, then sum all of the returns (ie word frequency across the documents you searched).</p>
<p>Ideally before you aggregate word frequency and whatnot we want to preprocess the data (I.e. remove non-word characters, make lowercase, tokenize, lemmatize, etc.) This way you have better accuracy collecting your ‘Features,’ ie the words. “This is amazing!”, “I’m so amazed”, and, “She amazes me.” All return a different ‘token’ or feature for ‘Amaze’ though they all use it similarly.</p>
<p>If we don’t need to preprocess and we aren’t making many observations or data manipulations then we can do a quick array to capture words. You’ll still probably want to manually alter your words (Amazing -> amaz, etc), lowercase them, and tokenize your document strings.</p>
<p>An alternative approach would be to use regex and .apply() with a user function that appends the return of re.findall() to a list; but this is computationally inefficient so really only good for a handful of words across <500,000 documents; and even then depending on your processing power that’ll take minutes.</p>
<p>Or you might use listcomp to set the value directly to the cell location.</p>
<p>Sorry I’m not at computer; will check back later and add some code when I can. Let me know a little more about your dataset size please. Thanks!</p> | python|pandas|scikit-learn|sentiment-analysis | 0 |
78 | 62,576,811 | Need help in debugging Shallow Neural network using numpy | <p>I'm doing a hands-on for learning and have created a model in python using numpy that's being trained on breast cancer dataSet from sklearn library. Model is running without any error and giving me Train and Test accuracy as 92.48826291079813% and 90.9090909090909% respectively. However somehow I'm not able to complete the hands-on since (probably) my result is different than expected. I don't know where the problem is because I don't know the right answer, also don't see any error.</p>
<p>Would request someone to help me with this. Code is given below.</p>
<pre><code>#Import numpy as np and pandas as pd
"""
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
**Define method initialiseNetwork() initilise weights with zeros of shape(num_features, 1) and also bias b to zero
parameters: num_features(number of input features)
returns : dictionary of weight vector and bias**
def initialiseNetwork(num_features):
W = np.zeros((num_features,1))
b = 0
parameters = {"W": W, "b": b}
return parameters
** define function sigmoid for the input z.
parameters: z
returns: $1/(1+e^{(-z)})$ **
def sigmoid(z):
a = 1/(1 + np.exp(-z))
return a
** Define method forwardPropagation() which implements forward propagtion defined as Z = (W.T dot_product X) + b, A = sigmoid(Z)
parameters: X, parameters
returns: A **
def forwardPropagation(X, parameters):
W = parameters["W"]
b = parameters["b"]
Z = np.dot(W.T,X) + b
A = sigmoid(Z)
return A
** Define function cost() which calculate the cost given by −(sum(Y\*log(A)+(1−Y)\*log(1−A)))/num_samples, here * is elementwise product
parameters: A,Y,num_samples(number of samples)
returns: cost **
def cost(A, Y, num_samples):
cost = -1/num_samples * np.sum(Y*np.log(A) + (1-Y)*(np.log(1-A)))
#cost = Y*np.log(A) + (1-Y)*(np.log(1-A))
return cost
** Define method backPropgation() to get the derivatives of weigths and bias
parameters: X,Y,A,num_samples
returns: dW,db **
def backPropagration(X, Y, A, num_samples):
dZ = A - Y
dW = (np.dot(X,dZ.T))/num_samples #(X dot_product dZ.T)/num_samples
db = np.sum(dZ)/num_samples #sum(dZ)/num_samples
return dW, db
** Define function updateParameters() to update current parameters with its derivatives
w = w - learning_rate \* dw
b = b - learning_rate \* db
parameters: parameters,dW,db, learning_rate
returns: dictionary of updated parameters **
def updateParameters(parameters, dW, db, learning_rate):
W = parameters["W"] - (learning_rate * dW)
b = parameters["b"] - (learning_rate * db)
return {"W": W, "b": b}
** Define the model for forward propagation
parameters: X,Y, num_iter(number of iterations), learning_rate
returns: parameters(dictionary of updated weights and bias) **
def model(X, Y, num_iter, learning_rate):
num_features = X.shape[0]
num_samples = X.shape[1]
parameters = initialiseNetwork(num_features) #call initialiseNetwork()
for i in range(num_iter):
#A = forwardPropagation(X, Y, parameters) # calculate final output A from forwardPropagation()
A = forwardPropagation(X, parameters)
if(i%100 == 0):
print("cost after {} iteration: {}".format(i, cost(A, Y, num_samples)))
dW, db = backPropagration(X, Y, A, num_samples) # calculate derivatives from backpropagation
parameters = updateParameters(parameters, dW, db, learning_rate) # update parameters
return parameters
** Run the below cell to define the function to predict the output.It takes updated parameters and input data as function parameters and returns the predicted output **
def predict(X, parameters):
W = parameters["W"]
b = parameters["b"]
b = b.reshape(b.shape[0],1)
Z = np.dot(W.T,X) + b
Y = np.array([1 if y > 0.5 else 0 for y in sigmoid(Z[0])]).reshape(1,len(Z[0]))
return Y
** The code in the below cell loads the breast cancer data set from sklearn.
The input variable(X_cancer) is about the dimensions of tumor cell and targrt variable(y_cancer) classifies tumor as malignant(0) or benign(1) **
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
** Split the data into train and test set using train_test_split(). Set the random state to 25. Refer the code snippet in topic 4 **
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 25)
** Since the dimensions of tumor is not uniform you need to normalize the data before feeding to the network
The below function is used to normalize the input data. **
def normalize(data):
col_max = np.max(data, axis = 0)
col_min = np.min(data, axis = 0)
return np.divide(data - col_min, col_max - col_min)
** Normalize X_train and X_test and assign it to X_train_n and X_test_n respectively **
X_train_n = normalize(X_train)
X_test_n = normalize(X_test)
** Transpose X_train_n and X_test_n so that rows represents features and column represents the samples
Reshape Y_train and y_test into row vector whose length is equal to number of samples.Use np.reshape() **
X_trainT = X_train_n.T
#print(X_trainT.shape)
X_testT = X_test_n.T
#print(X_testT.shape)
y_trainT = y_train.reshape(1,X_trainT.shape[1])
y_testT = y_test.reshape(1,X_testT.shape[1])
** Train the network using X_trainT,y_trainT with number of iterations 4000 and learning rate 0.75 **
parameters = model(X_trainT, y_trainT, 4000, 0.75) #call the model() function with parametrs mentioned in the above cell
** Predict the output of test and train data using X_trainT and X_testT using predict() method> Use the parametes returned from the trained model **
yPredTrain = predict(X_trainT, parameters) # pass weigths and bias from parameters dictionary and X_trainT as input to the function
yPredTest = predict(X_testT, parameters) # pass the same parameters but X_testT as input data
** Run the below cell print the accuracy of model on train and test data. ***
accuracy_train = 100 - np.mean(np.abs(yPredTrain - y_trainT)) * 100
accuracy_test = 100 - np.mean(np.abs(yPredTest - y_testT)) * 100
print("train accuracy: {} %".format(accuracy_train))
print("test accuracy: {} %".format(accuracy_test))
</code></pre>
<hr />
<p>My Output:
train accuracy: 92.48826291079813 %
test accuracy: 90.9090909090909 %</p> | <p>I figured out where the problem was. It was the third line in predict function where I was reshaping bias which was not at all necessary.</p>
<pre><code>def predict(X, parameters):
W = parameters["W"]
b = parameters["b"]
    b = b.reshape(b.shape[0],1)   # <-- the unnecessary line; remove it
Z = np.dot(W.T,X) + b
Y = np.array([1 if y > 0.5 else 0 for y in sigmoid(Z[0])]).reshape(1,len(Z[0]))
return Y
</code></pre>
<p>and the third line in the back-propagation function needed to be corrected to <code>np.sum(dZ)/num_samples</code>.</p>
<pre><code>def backPropagration(X, Y, A, num_samples):
dZ = A - Y
dW = (np.dot(X,dZ.T))/num_samples
    db = np.sum(dZ)/num_samples   # <-- corrected line
return dW, db
</code></pre>
<p>After I corrected both functions, the model gave me train accuracy as 98.59154929577464% and test accuracy as 93.00699300699301%.</p> | python|numpy|neural-network | 0 |
79 | 62,637,487 | Numpy Histogram over very tiny floats | <p>I have an array with small float numbers, here is an exempt:</p>
<pre><code>[-0.000631510156545283, 0.0005999252334386763, 2.6784775066479167e-05,
-6.171351407584846e-05, -2.0256783283654057e-05, -5.700196588437318e-05,
0.0006830172130385885, -7.862102776837944e-06, 0.0008167604859504389,
0.0004497656945683915, -0.00017132944173890756, -0.00013510823579343265,
0.00019666267095029728, -9.0271602657355e-06, 0.0005219852103996746,
4.010928726736523e-05, -0.0005287787999295592, 0.00023883106926381664,
0.0006348661301799839, 0.0003881285984411852]
</code></pre>
<p>(Edit: The whole array contains ~40k floats)</p>
<p>The numbers show the change of a measurement over time, e.g. +0.0001 means the measurement increases by 0.0001.</p>
<p>I'd like to plot a histogram over the whole array. Currently, <code>pyplot.hist</code> creates a plot which plugs all values in one bin (<a href="https://i.stack.imgur.com/op2lU.png" rel="nofollow noreferrer">This image shows the current histogram.</a>, created with the following code (edited):</p>
<pre><code>import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 1, figsize=(20,20))
array = [] # floats here
axs.hist(array,bins=10)
axs.set_ylabel("Histogram of temperature/weight ratio")
axs.set_xlabel("Bins")
</code></pre>
<p>).
I guess this is due to the very small numbers - am I right here?</p>
<p>I tried using <code>hist, bins = numpy.histogram()</code> and plot this, with the same results. (Following this question <a href="https://stackoverflow.com/questions/17753501/numpy-histogram-representing-floats-with-approximate-values-as-the-same">here</a>).</p>
<p>How can I create a histogram over such small numbers, so that the values are distributed over e.g. 100 bins, and not all plugged into the first bin? Do I need to preprocess my data?</p> | <p>For other people looking for an answer:</p>
<p>As Jody Klymak suggested in a comment to my question, the fix is to specify the bins manually.
I did not need to preprocess the data further, as I had thought I would.</p>
<p>Example:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
array = [...] # large array with tiny floats
fig, axs = plt.subplots(1, 1, figsize=(20,20))
hist = axs.hist(array, np.arange(-0.01, 0.01, 0.0001)) #numpy to create bins over range
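# A possible variant (my addition, not from the original answer): derive ~100 bins
# from the data itself instead of hard-coding the range
# hist = axs.hist(array, np.linspace(min(array), max(array), 101))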
plt.show()
</code></pre> | python|numpy|matplotlib|histogram | 1 |
80 | 62,826,624 | Change bar colors in pandas matplotlib bar chart by passing a list/tuple | <p>There are several threads on this topic, but none of them seem to directly address my question. I would like to plot a bar chart from a pandas dataframe with a custom color scheme that does not rely on a map, e.g. use an arbitrary list of colors. It looks like I can pass a concatenated string with color shorthand names (first example below). When I use the suggestion <a href="https://stackoverflow.com/questions/26793165/pandas-matplotlib-bar-chart-with-colors-defined-by-column">here</a>, the first color is repeated (see second example below). There is a comment in that post which eludes to the same behavior I am observing. Of course, I could do this by setting the subplot, but I'm lazy and want to do it in one line. So, I'd like to use the final example where I pass in a list of hex codes and it works as expected. I'm using pandas versions >=0.24 and matplotlib versions >1.5. My questions are:</p>
<ul>
<li>Why does this happen?</li>
<li>What am I doing wrong?</li>
<li>Can I pass a list of colors?</li>
</ul>
<hr />
<pre><code>pd.DataFrame( [ 1, 2, 3, 4, 5 ] ).plot( kind="bar", color="brgmk" )
</code></pre>
<p><a href="https://i.stack.imgur.com/vHbXG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vHbXG.png" alt="enter image description here" /></a></p>
<pre><code>pd.DataFrame( [ 1, 2, 3, 4, 5 ] ).plot( kind="bar", color=[ "b", "r", "g", "m", "k" ] )
</code></pre>
<p><a href="https://i.stack.imgur.com/mOhnP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mOhnP.png" alt="enter image description here" /></a></p>
<pre><code>pd.DataFrame( [ 1, 2, 3, 4, 5 ] ).plot( kind="bar", color=[ "#0000FF", "#FF0000", "#008000", "#FF00FF", "#000000" ] )
</code></pre>
<p><a href="https://i.stack.imgur.com/mOhnP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mOhnP.png" alt="enter image description here" /></a></p> | <p>When plotting a dataframe, the first color information is used for the first column, the second for the second column etc. Color information may be just one value that is then used for all rows of this column, or multiple values that are used one-by-one for each row of the column (repeated from the beginning if more rows than colors). See the following example:</p>
<pre><code>pd.DataFrame( [[ 1, 4], [2, 5], [3, 6]] ).plot(kind="bar", color=[[ "b", "r", "g" ], "m"] )
</code></pre>
<p><a href="https://i.stack.imgur.com/Ozf4j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ozf4j.png" alt="enter image description here" /></a></p>
<p>So in your case you just need to put the list of color values in a list (specifically not a tuple):</p>
<pre><code>pd.DataFrame( [ 1, 2, 3, 4, 5 ] ).plot( kind="bar", color=[[ "b", "r", "g", "m", "k" ]] )
</code></pre>
<p>or</p>
<pre><code>pd.DataFrame( [ 1, 2, 3, 4, 5 ] ).plot( kind="bar", color=[[ "#0000FF", "#FF0000", "#008000", "#FF00FF", "#000000" ]] )
</code></pre>
<hr>
<p>The first case in the OP (<code>color="brgmk"</code>) works as expected as pandas <a href="https://github.com/pandas-dev/pandas/blob/2c3edaaaa0475841349659297039a606a72e9273/pandas/plotting/_matplotlib/core.py#L206-L212" rel="nofollow noreferrer">internally puts the color string in a list</a> (<a href="https://github.com/pandas-dev/pandas/blob/2c3edaaaa0475841349659297039a606a72e9273/pandas/_libs/lib.pyx#L956" rel="nofollow noreferrer">strings are not considered list-like</a>).</p> | python|pandas|matplotlib | 3 |
81 | 54,519,021 | How to compute hash of all the columns in Pandas Dataframe? | <p><code>df.apply</code> is a method that can apply a certain function to all the columns in a dataframe, or the required columns. However, my aim is to compute the hash of a string: this string is the concatenation of all the values in a row corresponding to all the columns. My current code is returning <code>NaN</code>.</p>
<p>The current code is:</p>
<pre><code>df["row_hash"] = df["row_hash"].apply(self.hash_string)
</code></pre>
<p>The function <code>self.hash_string</code> is:</p>
<pre><code>def hash_string(self, value):
return (sha1(str(value).encode('utf-8')).hexdigest())
</code></pre>
<p>Yes, it would be easier to merge all columns of Pandas dataframe but <a href="https://stackoverflow.com/questions/48290687/merging-all-columns-of-pandas-dataframes">current answer</a> couldn't help me either.</p>
<p>The file that I am reading is(the first 10 rows):</p>
<pre><code>16012,16013,16014,16015,16016,16017,16018,16019,16020,16021,16022
16013,16014,16015,16016,16017,16018,16019,16020,16021,16022,16023
16014,16015,16016,16017,16018,16019,16020,16021,16022,16023,16024
16015,16016,16017,16018,16019,16020,16021,16022,16023,16024,16025
16016,16017,16018,16019,16020,16021,16022,16023,16024,16025,16026
</code></pre>
<p>The col names are: <code>col_test_1, col_test_2, .... , col_test_11</code></p> | <p>You can create a new column, which is concatenation of all others:</p>
<pre><code>df['new'] = df.astype(str).values.sum(axis=1)
</code></pre>
<p>And then apply your hash function on it</p>
<pre><code>df["row_hash"] = df["new"].apply(self.hash_string)
</code></pre>
<p>or this one-row should work:</p>
<pre><code>df["row_hash"] = df.astype(str).values.sum(axis=1).apply(hash_string)
</code></pre>
<p>However, not sure if you need a separate function here, so:</p>
<pre><code> df["row_hash"] = pd.Series(df.astype(str).values.sum(axis=1), index=df.index).apply(lambda x: sha1(str(x).encode('utf-8')).hexdigest())
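 # A possible variant (my addition): joining with a separator avoids accidental
 # collisions such as '1' + '23' vs '12' + '3' producing the same row string
 # df["row_hash"] = df.astype(str).apply('|'.join, axis=1).apply(hash_string)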
</code></pre> | python|python-3.x|pandas | 4 |
82 | 73,713,141 | Why does all my emission mu of HMM in pyro converge to the same number? | <p>I'm trying to create a Gaussian HMM model in pyro to infer the parameters of a very simple Markov sequence. However, my model fails to infer the parameters and something wired happened during the training process. Using the same sequence, hmmlearn has successfully infer the true parameters.</p>
<p>Full code can be accessed in here:</p>
<blockquote>
<p>https://colab.research.google.com/drive/1u_4J-dg9Y1CDLwByJ6FL4oMWMFUVnVNd#scrollTo=ZJ4PzdTUBgJi</p>
</blockquote>
<p>My model is modified from the example in here:</p>
<blockquote>
<p>https://github.com/pyro-ppl/pyro/blob/dev/examples/hmm.py</p>
</blockquote>
<p>I manually created a first order Markov sequence where there are 3 states, the true means are [-10, 0, 10], sigmas are [1,2,1].</p>
<p>Here is my model</p>
<pre><code>def model(observations, num_state):
assert not torch._C._get_tracing_state()
with poutine.mask(mask = True):
p_transition = pyro.sample("p_transition",
dist.Dirichlet((1 / num_state) * torch.ones(num_state, num_state)).to_event(1))
p_init = pyro.sample("p_init",
dist.Dirichlet((1 / num_state) * torch.ones(num_state)))
p_mu = pyro.param(name = "p_mu",
init_tensor = torch.randn(num_state),
constraint = constraints.real)
p_tau = pyro.param(name = "p_tau",
init_tensor = torch.ones(num_state),
constraint = constraints.positive)
current_state = pyro.sample("x_0",
dist.Categorical(p_init),
infer = {"enumerate" : "parallel"})
for t in pyro.markov(range(1, len(observations))):
current_state = pyro.sample("x_{}".format(t),
dist.Categorical(Vindex(p_transition)[current_state, :]),
infer = {"enumerate" : "parallel"})
pyro.sample("y_{}".format(t),
dist.Normal(Vindex(p_mu)[current_state], Vindex(p_tau)[current_state]),
obs = observations[t])
</code></pre>
<p>My model is compiled as</p>
<pre><code>device = torch.device("cuda:0")
obs = torch.tensor(obs)
obs = obs.to(device)
torch.set_default_tensor_type("torch.cuda.FloatTensor")
guide = AutoDelta(poutine.block(model, expose_fn = lambda msg : msg["name"].startswith("p_")))
Elbo = Trace_ELBO
elbo = Elbo(max_plate_nesting = 1)
optim = Adam({"lr": 0.001})
svi = SVI(model, guide, optim, elbo)
</code></pre>
<p>As the training goes, the ELBO has decreased steadily as shown. However, the three means of the states converges.
<a href="https://i.stack.imgur.com/sRFMI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sRFMI.png" alt="ELBO and means" /></a></p>
<p>I have tried to put the for loop of my model into a pyro.plate and switch pyro.param to pyro.sample and vice versa, but nothing worked for my model.</p> | <p>I have not tried this model, but I think it should be possible to solve the problem by modifying the model in the following way:
def model(observations, num_state):</p>
<pre><code>assert not torch._C._get_tracing_state()
with poutine.mask(mask = True):
p_transition = pyro.sample("p_transition",
dist.Dirichlet((1 / num_state) * torch.ones(num_state, num_state)).to_event(1))
p_init = pyro.sample("p_init",
dist.Dirichlet((1 / num_state) * torch.ones(num_state)))
p_mu = pyro.sample("p_mu",
dist.Normal(torch.zeros(num_state), torch.ones(num_state)).to_event(1))
p_tau = pyro.sample("p_tau",
dist.HalfCauchy(torch.zeros(num_state)).to_event(1))
current_state = pyro.sample("x_0",
dist.Categorical(p_init),
infer = {"enumerate" : "parallel"})
for t in pyro.markov(range(1, len(observations))):
current_state = pyro.sample("x_{}".format(t),
dist.Categorical(Vindex(p_transition)[current_state, :]),
infer = {"enumerate" : "parallel"})
pyro.sample("y_{}".format(t),
dist.Normal(Vindex(p_mu)[current_state], Vindex(p_tau)[current_state]),
obs = observations[t])
</code></pre>
<p>The model would then be trained using MCMC:</p>
<pre><code># MCMC
hmc_kernel = NUTS(model, target_accept_prob = 0.9, max_tree_depth = 7)
mcmc = MCMC(hmc_kernel, num_samples = 1000, warmup_steps = 100, num_chains = 1)
mcmc.run(obs)
</code></pre>
<p>The results could then be analysed using <code>mcmc.get_samples()</code>.</p> | machine-learning|pytorch|statistics|artificial-intelligence|pyro | 0 |
83 | 73,553,937 | How to aggregate of a datetime dataframe based on days and then how to calculate average? | <p>I have a dataframe with two columns, date and values.</p>
<pre><code>import numpy as np
import pandas as pd
import datetime
from pandas import Timestamp
a = [[Timestamp('2014-06-17 00:00:00'), 0.023088847378082145],
[Timestamp('2014-06-18 00:00:00'), -0.02137513226556209],
[Timestamp('2014-06-19 00:00:00'), -0.023107608748262454],
[Timestamp('2014-06-20 00:00:00'), -0.005373831609931101],
[Timestamp('2014-06-23 00:00:00'), 0.0013989552359290336],
[Timestamp('2014-06-24 00:00:00'), 0.02109937927428618],
[Timestamp('2014-06-25 00:00:00'), -0.008350303722982733],
[Timestamp('2014-06-26 00:00:00'), -0.037202662556428456],
[Timestamp('2014-06-27 00:00:00'), 0.00019764611153205713],
[Timestamp('2014-06-30 00:00:00'), 0.003260577288983324],
[Timestamp('2014-07-01 00:00:00'), -0.0072877596184343085],
[Timestamp('2014-07-02 00:00:00'), 0.010168645518006336],
[Timestamp('2014-07-03 00:00:00'), -0.011539447143668391],
[Timestamp('2014-07-04 00:00:00'), 0.025285678867997374],
[Timestamp('2014-07-07 00:00:00'), -0.004602922207492033],
[Timestamp('2014-07-08 00:00:00'), -0.031298707413768834],
[Timestamp('2014-07-09 00:00:00'), 0.005929355847110296],
[Timestamp('2014-07-10 00:00:00'), -0.0037464360290646592],
[Timestamp('2014-07-11 00:00:00'), -0.030786217361942203],
[Timestamp('2014-07-14 00:00:00'), -0.004914625647469917],
[Timestamp('2014-07-15 00:00:00'), 0.010865602291856957],
[Timestamp('2014-07-16 00:00:00'), 0.018000430446729165],
[Timestamp('2014-07-17 00:00:00'), -0.007274924758687407],
[Timestamp('2014-07-18 00:00:00'), -0.005852455583728933],
[Timestamp('2014-07-21 00:00:00'), 0.021397540863909104],
[Timestamp('2014-07-22 00:00:00'), 0.03337842963821558],
[Timestamp('2014-07-23 00:00:00'), 0.0022309307682939483],
[Timestamp('2014-07-24 00:00:00'), 0.007548983718178803],
[Timestamp('2014-07-25 00:00:00'), -0.018442920569716525],
[Timestamp('2014-07-28 00:00:00'), -0.015902529445214975]]
df = pd.DataFrame(a, columns=['dates', 'Values'])
</code></pre>
<p>I want to calculate the average of the column <strong>Values</strong> aggregating each 5 days. The expected outcome in dataframe should be something like</p>
<pre><code> Average value
0 avg of first 5days
1 avg of next 5days
2 avg of next 5days
3 avg of next 5days
4 avg of next 5days
5 avg of next 5days
</code></pre>
<p>If possible then please help me to get a dataframe something like the below,</p>
<pre><code> Group Days Average value
0 0 avg of first 5days
1 1 avg of next 5days
2 2 avg of next 5days
3 3 avg of next 5days
4 4 avg of next 5days
5 5 avg of next 5days
</code></pre>
<p>Please help me with this.</p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>DataFrame.resample</code></a> with aggregate <code>mean</code> by <code>5D</code> for 5 days:</p>
<pre><code>df = df.resample('5D', on='dates')['Values'].mean().reset_index()
</code></pre>
<hr />
<pre><code>print (df)
dates Values
0 2014-06-17 -0.006692
1 2014-06-22 -0.005764
2 2014-06-27 -0.001277
3 2014-07-02 0.007972
4 2014-07-07 -0.012901
5 2014-07-12 0.007984
6 2014-07-17 0.002757
7 2014-07-22 0.006179
8 2014-07-27 -0.015903
</code></pre>
<p>EDIT: If need omit Sundays, Saturdays and holidays use <code>5B</code> for bussiness days:</p>
<pre><code>df = df.resample('5B', on='dates')['Values'].mean().reset_index()
print (df)
dates Values
0 2014-06-17 -0.005074
1 2014-06-24 -0.004199
2 2014-07-01 0.002405
3 2014-07-08 -0.012963
4 2014-07-15 0.007427
5 2014-07-22 0.001763
</code></pre>
<p>If need <code>Group</code> column use <code>arange</code>:</p>
<pre><code>df = df.resample('5D', on='dates')['Values'].mean().reset_index()
df['Group'] = np.arange(len(df))
print (df)
dates Values Group
0 2014-06-17 -0.006692 0
1 2014-06-22 -0.005764 1
2 2014-06-27 -0.001277 2
3 2014-07-02 0.007972 3
4 2014-07-07 -0.012901 4
5 2014-07-12 0.007984 5
6 2014-07-17 0.002757 6
7 2014-07-22 0.006179 7
8 2014-07-27 -0.015903 8
</code></pre> | python|pandas|dataframe|datetime | 4 |
84 | 73,755,801 | how to show pandas data frame data as bar graph? | <p>how to show pandas data frame data as bar graph?</p>
<p>I have the data like below,</p>
<pre><code>[{'index': 0, 'Year_Week': 670, 'Sales_CSVolume': 10},
{'index': 1, 'Year_Week': 680, 'Sales_CSVolume': 8},
{'index': 2, 'Year_Week': 700, 'Sales_CSVolume': 4},
{'index': 3, 'Year_Week': 850, 'Sales_CSVolume': 13}]
</code></pre>
<p>I want to draw a bar graph where <code>Year_Week</code> should be in <code>X-Axis</code> and <code>Sales_CSVolume</code> should be in <code>Y-axis</code> like below screenshot.</p>
<p><a href="https://i.stack.imgur.com/1UBk0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1UBk0.png" alt="enter image description here" /></a></p>
<p>I tried something like below,</p>
<pre><code>Year_Week = []
Sales_CSVolume = []
import matplotlib.pyplot as plt
for idx, elem in enumerate(data):
for key, value in elem.items():
Year_Week.append(key == 'Year_Week')
print(f"List element: {idx:>2} Key: {key:<20} Value: {value}")
plt.bar(Year_Week, Sales_CSVolume)
plt.show()
</code></pre>
<p>The above code does not work as shown in the screenshot graph. Can anyone help me sort out this issue?</p> | <pre><code>import pandas as pd
import seaborn as sns
data = [{'index': 0, 'Year_Week': 670, 'Sales_CSVolume': 10},
{'index': 1, 'Year_Week': 680, 'Sales_CSVolume': 8},
{'index': 2, 'Year_Week': 700, 'Sales_CSVolume': 4},
{'index': 3, 'Year_Week': 850, 'Sales_CSVolume': 13}]
df = pd.DataFrame(data)
df.set_index('index', inplace=True)
sns.barplot(data=df, x='Year_Week', y='Sales_CSVolume')
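# A pandas-only alternative (a sketch on the same df):
# df.plot.bar(x='Year_Week', y='Sales_CSVolume', legend=False)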
</code></pre>
<p><a href="https://i.stack.imgur.com/91kF5.png" rel="nofollow noreferrer">Result</a></p> | python|pandas|matplotlib | 1 |
85 | 71,358,446 | Pandas DataFrame subtraction is getting an unexpected result. Concatenating instead? | <p>I have two dataframes of the same size (510x6)</p>
<pre><code>preds
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
outputStats
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
</code></pre>
<p>when I execute:</p>
<pre><code>preds - outputStats
</code></pre>
<p>I expect a 510 x 6 dataframe with elementwise subtraction. Instead I get this:</p>
<pre><code> 0 1 2 3 4 5 0 1 2 3 4 5
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ...
505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
506 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
507 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
508 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
509 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>I've tried dropping columns and the like, and that hasn't helped. I also get the same result with preds.subtract(outputStats). Any Ideas?</p> | <p>There are many ways that two different values can appear the same when displayed. One of the main ways is if they are different types, but corresponding values for those types. For instance, depending on how you're displaying them, the int <code>1</code> and the str <code>'1'</code> may not be easily distinguished. You can also have whitespace characters, such as <code>'1'</code> versus <code>' 1'</code>.</p>
<p>If the problem is that one set is int while the other is str, you can solve the problem by converting them all to int or all to str. To do the former, do <code>df.columns = [int(col) for col in df.columns]</code>. To do the latter, <code>df.columns = [str(col) for col in df.columns]</code>. Converting to str is somewhat safer, as trying to convert to int can raise an error if the string isn't amenable to conversion (e.g. <code>int('y')</code> will raise an error), but int can be more usual as they have the numerical structure.</p>
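<p>A quick way to diagnose and fix that in your case (a sketch, assuming the frames are named as in the question):</p>
<pre><code>print(preds.columns)        # e.g. an integer index (RangeIndex / Int64Index)
print(outputStats.columns)  # e.g. Index(['0', '1', ...], dtype='object') -- string labels
# normalise both to the same label type, then subtract
preds.columns = [str(c) for c in preds.columns]
outputStats.columns = [str(c) for c in outputStats.columns]
diff = preds - outputStats
</code></pre>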
<p>You asked in a comment about dropping columns. You can do this with <code>drop</code> and including <code>axis=1</code> as a parameter to tell it to drop columns rather than rows, or you can use the <code>del</code> keyword. But changing the column names should remove the need to drop columns.</p> | pandas|dataframe|python-3.7|subtraction | 0 |
86 | 71,252,285 | Group date column into n-day periods | <p>I need a function that groups date column into n-day periods with respect to some start and end dates (1 year interval). To assign a quarter (~90 day period) to every date in the data frame I used the code below, which is not very neat (and I want to reuse it for 30-day period as well)</p>
<pre><code>def get_quarter(row, start_date, col_name):
# date = row['TRN_DT']
date = row[col_name]
if date >= start_date and date <= start_date + timedelta(days = 90):
return 0
if date > start_date + timedelta(days = 90) and date <= start_date + timedelta(180):
return 1
if date > start_date + timedelta(180) and date <= start_date + timedelta(270):
return 2
return 3
</code></pre>
<p>It basically checks row by row which interval current date belongs to. I was wondering whether there is a better way to do this. pandas.Series.dt.to_period() will not do since it uses a calendar year as a reference --start 01.Jan, end 31.Dec; that is, 16.Jan.XXXX will always be in Q1; what I want is for 16.Jan to be in Q3 if the start date is 16-Jun. Thanks</p> | <p>FTR, a possible solution is to shift every date in the series according to the <code>start_date</code>, to simulate that <code>start_date</code> is the beginning of the year:</p>
<pre><code>>>> start_date = pd.to_datetime("2021-06-16")
>>> dates_series = pd.Series([pd.to_datetime("2020-01-16"), pd.to_datetime("2020-04-16")], name="dates")
>>> dates_series.dt.quarter   # the plain calendar quarters, before shifting
0    1
1    2
Name: dates, dtype: int64
</code></pre>
<p>We calculate the difference between the current date and the beginning of the same year.</p>
<pre><code>>>> offset = start_date - start_date.replace(month=1, day=1)
>>> offset
166 days 00:00:00
</code></pre>
<p>We move all of our dates by the same offset to calculate the "new quarter"</p>
<pre><code>>>> (dates_series - offset).dt.quarter
0 3
1 4
Name: dates, dtype: int64
</code></pre> | python-3.x|pandas|dataframe|date|pandas-groupby | 1 |
87 | 72,780,603 | Format pandas dataframe output into a text file as a table (formatted and aligned to the max length of the data or header (which ever is longer)) | <pre><code>pd.DataFrame({
'ID': {
0: 11404371006,
1: 11404371007,
2: 11404371008,
3: 11404371009,
4: 11404371010,
5: 11404371011
},
'TABLE.F1': {
0: 'Y',
1: 'NULL',
2: 'N',
3: 'N',
4: 'N',
5: 'N'
},
'O': {
0: False,
1: False,
2: False,
3: False,
4: False,
5: False
}
})
</code></pre>
<p>I have the above data frame and would like to save the output to a file as pipe-delimited data like below.</p>
<p><a href="https://i.stack.imgur.com/j9nPZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j9nPZ.png" alt="Expected Output" /></a></p>
<p>So far I have tried pd.to_csv and pd.to_string(); both output the data in tabular format, but the data does not align to the max length of the column header or the data.</p>
<p>to_string()</p>
<p><a href="https://i.stack.imgur.com/a23hY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a23hY.png" alt="actual output with pd.to_string()" /></a></p>
<p>to_csv()</p>
<p><a href="https://i.stack.imgur.com/810uY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/810uY.png" alt="when using pd.to_csv(index=False, sep='|',line_terminator='|\n')" /></a></p> | <p>Use <code>to_markdown</code>:</p>
<pre><code>out = df.to_markdown(index=False, tablefmt='pipe', colalign=['center']*len(df.columns))
print(out)
# Output:
| ID | TABLE.F1 | O |
|:-----------:|:----------:|:-----:|
| 11404371006 | Y | False |
| 11404371007 | NULL | False |
| 11404371008 | N | False |
| 11404371009 | N | False |
| 11404371010 | N | False |
| 11404371011 | N | False |
</code></pre>
<p>To remove the second line:</p>
<pre><code>out = out.split('\n')
out.pop(1)
out = '\n'.join(out)
print(out)
# Output
| ID | TABLE.F1 | O |
| 11404371006 | Y | False |
| 11404371007 | NULL | False |
| 11404371008 | N | False |
| 11404371009 | N | False |
| 11404371010 | N | False |
| 11404371011 | N | False |
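
# Added note (not part of the original answer): since the goal is a text
# file, the markdown string can be written out directly; the path below
# is just an assumed example.
with open('output.txt', 'w') as f:
    f.write(out)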
</code></pre> | python|pandas|dataframe | 3 |
88 | 72,672,399 | dataframe.str[start:stop] where start and stop are columns in same data frame | <p>I would like to use pandas .str to vectorize a slice operation on a pandas column whose values are lists, where the start and stop values are integers held in the start and stop columns of the same dataframe.
Example:</p>
<pre><code>df['column_with_list_values'].str[start:stop]
df[['list_values', 'start', 'stop']]
list_values start stop
0 [5, 7, 6, 8] 0 2
1 [1, 3, 5, 7, 2, 4, 6, 8] 1 3
2 [1, 3, 5, 7, 2, 4, 6, 8] 0 2
3 [1, 3, 5, 7, 2, 4, 6, 8] 0 2
4 [1, 3, 5, 7, 2, 4, 6, 8] 1 3
5 [1, 3, 5, 7, 2, 4, 6, 8] 2 4
6 [1, 3, 5, 7, 2, 4, 6, 8] 0 2
and result would be
0 [5, 7]
1 [3, 5]
2 [1, 3]
3 [1, 3]
4 [3, 5]
5 [5, 7]
6 [1, 3]
</code></pre>
<p>Thanks!</p> | <pre><code>df.apply(lambda x: x.list_values[x.start:x.stop], axis=1)
</code></pre>
<p>Output:</p>
<pre><code>0 [5, 7]
1 [3, 5]
2 [1, 3]
3 [1, 3]
4 [3, 5]
5 [5, 7]
6 [1, 3]
dtype: object
</code></pre>
<hr />
<p>I'm not sure why, but the fastest variation appears to be:</p>
<pre><code>df['sliced'] = [lst[start:stop] for lst, start, stop in zip(df.list_values.tolist(), df.start.tolist(), df.stop.tolist())]
</code></pre>
<p>My testing:</p>
<pre><code>df = pd.DataFrame({'list_values': {0: [5, 7, 6, 8], 1: [1, 3, 5, 7, 2, 4, 6, 8], 2: [1, 3, 5, 7, 2, 4, 6, 8], 3: [1, 3, 5, 7, 2, 4, 6, 8], 4: [1, 3, 5, 7, 2, 4, 6, 8], 5: [1, 3, 5, 7, 2, 4, 6, 8], 6: [1, 3, 5, 7, 2, 4, 6, 8]}, 'start': {0: 0, 1: 1, 2: 0, 3: 0, 4: 1, 5: 2, 6: 0}, 'stop': {0: 2, 1: 3, 2: 2, 3: 2, 4: 3, 5: 4, 6: 2}})
df = pd.concat([df]*100000)
# Shape is now (700000, 3)
def v1(df):
temp = df.copy()
temp['sliced'] = [lst[start:stop] for lst, start, stop in temp.values.tolist()]
def v2(df):
temp = df.copy()
temp['sliced'] = [lst[start:stop] for lst, start, stop in zip(temp.list_values, temp.start, temp.stop)]
def v3(df):
temp = df.copy()
temp['sliced'] = [lst[start:stop] for lst, start, stop in temp.values]
def v4(df):
temp = df.copy()
temp['sliced'] = [lst[start:stop] for lst, start, stop in zip(df.list_values.tolist(), df.start.tolist(), df.stop.tolist())]
def v5(df):
temp = df.copy()
temp['sliced'] = temp.apply(lambda x: x.list_values[x.start:x.stop], axis=1)
%timeit -n 10 v1(df)
%timeit -n 10 v2(df)
%timeit -n 10 v3(df)
%timeit -n 10 v4(df)
%timeit v5(df)
</code></pre>
<p>Output:</p>
<pre><code># v1: temp.values.tolist()
235 ms ± 21.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# v2: zip(temp.list_values, temp.start, temp.stop)
249 ms ± 9.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# v3: temp.values
578 ms ± 6.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# v4: zip(df.list_values.tolist(), df.start.tolist(), df.stop.tolist())
149 ms ± 8.83 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# v5: apply
12.1 s ± 165 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>But yes, the list comprehension method, no matter what variation, is <em>significantly</em> faster than using <code>apply</code>.</p>
<hr />
<p>Third update:</p>
<p>I figured out how to sort of vectorize this problem, using groupby and transform. Still not quite as good as the best list comprehension in my testing, but pretty darn good.</p>
<pre><code>def v6(df):
temp = df.copy()
temp['sliced'] = temp.groupby(['start','stop'])['list_values'].transform(lambda x: x.str[x.name[0]:x.name[1]])
# v6: groupby
256 ms ± 5.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre> | python|pandas | 2 |
89 | 72,583,548 | How to find row index which contains given string? Python | <p>I would like to upgrade my script to analyze data. Instead of manually checking which row number is the header line, I need to find the index of the row that contains a specific string. Now I read the csv directly into a pandas dataframe with the header line defined like this:</p>
<pre><code>df1 = pd.read_csv('sensor_1.csv', sep=',', header=101)
</code></pre>
<p>How can I read the csv, find the line containing the "Scan Number" text, and pass that line number as the header definition?</p>
<p>I tried this:</p>
<pre><code>FileList = (glob.glob("sensor_1.csv"))
for FileToProcess in FileList:
with open(FileToProcess) as readfile:
for cnt,line in enumerate(readfile):
if "Scan Number" in line:
cnt
readfile.close
df1 = pd.read_csv('sensor_1.csv', sep=',', header= cnt)
</code></pre>
<p>But this gives the highest matching index and an error at the end :/
Could you please help?</p>
<p>Thanks
Paulina</p> | <pre><code>fille_ = open('sensor_1.csv', 'r')
lines = fille_.readlines()
cnt = 0
for i in range(0, len(lines)):
if lines[i].find('Scan Number') !=-1:
cnt = i
break
print(cnt)
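
# Added sketch (not part of the original answer): close the file and use
# the found index as the header row when loading the csv with pandas.
fille_.close()
import pandas as pd
df1 = pd.read_csv('sensor_1.csv', sep=',', header=cnt)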
</code></pre>
<p>When the search phrase is found in a line, the loop stores that line's index in <code>cnt</code> and stops; the index is then printed after the loop.</p> | python|pandas | 1
90 | 59,477,254 | Replacing states with country name pandas | <p>Is there a way to change state abbreviations into "USA" in a data frame :</p>
<pre><code>'CIBA GEIGY CORP,BASIC PHARMACEUT RES,ARDSLEY,NY 10502'
</code></pre>
<p>to </p>
<pre><code>'CIBA GEIGY CORP,BASIC PHARMACEUT RES,ARDSLEY,USA 10502'
</code></pre>
<p>I tried with a dictionary: <code>df.Authors.str.translate(us_states)</code> and also <code>.apply(lambda x: x.translate(us_states))</code> but it isn't working.
Do you have any ideas?</p>
<p>Dictionary with the changes that I need to make:</p>
<pre><code>us_states= {'AL': 'USA',
'AK': 'USA',
'AZ': 'USA',
'AR': 'USA',
'CA': 'USA',
'CO': 'USA',
'CT': 'USA',
'DE': 'USA',
'DC': 'USA',
'FL': 'USA',
'GA': 'USA',
'HI': 'USA',
'ID': 'USA',
'IL': 'USA',
'IN': 'USA',
'IA': 'USA',
'KS': 'USA',
'KY': 'USA',
'LA': 'USA',
'ME': 'USA',
'MD': 'USA',
'MA': 'USA',
'MI': 'USA',
'MN': 'USA',
'MS': 'USA',
'MO': 'USA',
'MT': 'USA',
'NE': 'USA',
'NV': 'USA',
'NH': 'USA',
'NJ': 'USA',
'NM': 'USA',
'NY': 'USA',
'NC': 'USA',
'ND': 'USA',
'MP': 'USA',
'OH': 'USA',
'OK': 'USA',
'OR': 'USA',
'PW': 'USA',
'PA': 'USA',
'PR': 'USA',
'RI': 'USA',
'SC': 'USA',
'SD': 'USA',
'TN': 'USA',
'TX': 'USA',
'UT': 'USA',
'VT': 'USA',
'VI': 'USA',
'VA': 'USA',
'WA': 'USA',
'WV': 'USA',
'WI': 'USA',
'WY': 'USA'}
</code></pre>
<p>So each abbreviation should turn into "USA"</p> | <p>I think you can just use <code>df.replace</code> (works for <code>pd.Series</code> too):</p>
<p><code>df['Authors'].replace(us_states, inplace=True, regex=True)</code>.</p>
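<p>One caveat (my addition, not part of the original answer): with <code>regex=True</code>, bare two-letter keys can also match inside longer words (e.g. the <code>CO</code> in <code>CORP</code>). A more defensive sketch, assuming the same <code>us_states</code> dictionary, anchors the abbreviations to word boundaries:</p>
<pre><code>import re

# Build a single pattern like r'\b(?:AL|AK|...)\b' from the dictionary keys,
# so only whole two-letter tokens are replaced.
pattern = r'\b(?:' + '|'.join(map(re.escape, us_states)) + r')\b'
df['Authors'] = df['Authors'].str.replace(pattern, 'USA', regex=True)
</code></pre>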
<p>Documentation here:
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html</a></p> | python|python-3.x|pandas | 1 |
91 | 40,352,841 | How can I build a TF Graph that has separate inference and training parts? | <p>Referencing <a href="https://stackoverflow.com/questions/40340807/how-can-i-build-a-tf-graph-for-both-training-and-inference-with-tf-train-shuffle">this post</a> asked previously, as the suggestion was to create a graph that has separate inference and training parts.</p>
<p>Boilerplate code would be greatly appreciated.</p> | <p>MNIST convolution in the repository is an example -- <a href="https://github.com/tensorflow/tensorflow/blob/8e48ec6ea0492e2cb9fd19c0a2ccf41afc7b4dc6/tensorflow/models/image/mnist/convolutional.py" rel="nofollow">tensorflow/tensorflow/models/image/mnist/convolutional.py</a></p>
<p>It follows a pattern where you factor out the model-construction code into a function (<code>model</code> in <code>convolutional.py</code>) and call it separately for the eval and training parts:</p>
<pre><code> logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits, train_labels_node))
eval_prediction = tf.nn.softmax(model(eval_data))
</code></pre>
<p>For training you feed into <code>train_data_node</code> and minimize <code>loss</code>, for eval, you feed into <code>eval_data</code> node and get the results at <code>eval_prediction</code></p> | tensorflow | 2 |
92 | 40,587,902 | Function to select from columns pandas df | <p>I have this test table in a pandas dataframe:</p>
<pre><code> Leaf_category_id session_id product_id
0 111 1 987
3 111 4 987
4 111 1 741
1 222 2 654
2 333 3 321
</code></pre>
<p><a href="https://i.stack.imgur.com/mfsgS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mfsgS.png" alt="enter image description here"></a></p>
<p>What I want is:</p>
<pre><code>for leaf_category_id 111:
</code></pre>
<p>The result should be:</p>
<pre><code> session_id product_id
1 987,741
4 987
</code></pre>
<p>Similarly, can I define a function that does the same for all the leaf_category_ids? My table contains more rows; this was just a snapshot of it.</p> | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> first and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <code>apply</code> and <code>join</code>:</p>
<pre><code>df = pd.DataFrame({'Leaf_category_id':[111,111,111,222,333],
'session_id':[1,4,1,2,3],
'product_id':[987,987,741,654,321]},
columns =['Leaf_category_id','session_id','product_id'])
print (df)
Leaf_category_id session_id product_id
0 111 1 987
1 111 4 987
2 111 1 741
3 222 2 654
4 333 3 321
print (df[df.Leaf_category_id == 111]
.groupby('session_id')['product_id']
.apply(lambda x: ','.join(x.astype(str))))
session_id
1 987,741
4 987
Name: product_id, dtype: object
</code></pre>
<p>EDIT by comment:</p>
<pre><code>print (df.groupby(['Leaf_category_id','session_id'])['product_id']
.apply(lambda x: ','.join(x.astype(str)))
.reset_index())
Leaf_category_id session_id product_id
0 111 1 987,741
1 111 4 987
2 222 2 654
3 333 3 321
</code></pre>
<p>Or if need for each unique value in <code>Leaf_category_id</code> <code>DataFrame</code>:</p>
<pre><code>for i in df.Leaf_category_id.unique():
print (df[df.Leaf_category_id == i] \
.groupby('session_id')['product_id'] \
.apply(lambda x: ','.join(x.astype(str))) \
.reset_index())
session_id product_id
0 1 987,741
1 4 987
session_id product_id
0 2 654
session_id product_id
0 3 321
</code></pre> | python|pandas|numpy | 1 |
93 | 40,386,175 | Python: convert string array to int array in dataframe | <p>I have a data frame in which duration is one of the attributes. The duration column's content is like:</p>
<pre><code> array(['487', '346', ..., '227', '17']).
</code></pre>
<p>And from df.info() I get the following (Data columns, total 22 columns):</p>
<pre><code> duration 2999 non-null object
campaign 2999 non-null object
...
</code></pre>
<p>Now I want to convert duration into int. Is there any solution?</p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow noreferrer"><code>astype</code></a>:</p>
<pre><code>df['duration'] = df['duration'].astype(int)
</code></pre>
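<p>(Added note, not part of the original answer: if some entries might not be clean digit strings, <code>pd.to_numeric</code> with <code>errors='coerce'</code> is a more forgiving alternative; unparseable values become NaN, which makes the column float.)</p>
<pre><code>df['duration'] = pd.to_numeric(df['duration'], errors='coerce')
</code></pre>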
<p><strong>Timings</strong></p>
<p>Using the following setup to produce a large sample dataset:</p>
<pre><code>n = 10**5
data = list(map(str, np.random.randint(10**4, size=n)))
df = pd.DataFrame({'duration': data})
</code></pre>
<p>I get the following timings:</p>
<pre><code>%timeit -n 100 df['duration'].astype(int)
100 loops, best of 3: 10.9 ms per loop
%timeit -n 100 df['duration'].apply(int)
100 loops, best of 3: 44.3 ms per loop
%timeit -n 100 df['duration'].apply(lambda x: int(x))
100 loops, best of 3: 60.1 ms per loop
</code></pre> | python|pandas|numpy | 4 |
94 | 40,635,718 | Pandas merge column where between dates | <p>I have two dataframes - one of calls made to customers and another identifying active service durations by client. Each client can have multiple services, but they will not overlap. </p>
<pre><code>df_calls = pd.DataFrame([['A','2016-02-03',1],['A','2016-05-11',2],['A','2016-10-01',3],['A','2016-11-02',4],
['B','2016-01-10',5],['B','2016-04-25',6]], columns = ['cust_id','call_date','call_id'])
print df_calls
cust_id call_date call_id
0 A 2016-02-03 1
1 A 2016-05-11 2
2 A 2016-10-01 3
3 A 2016-11-02 4
4 B 2016-01-10 5
5 B 2016-04-25 6
</code></pre>
<p>and </p>
<pre><code>df_active = pd.DataFrame([['A','2016-01-10','2016-03-15',1],['A','2016-09-10','2016-11-15',2],
['B','2016-01-02','2016-03-17',3]], columns = ['cust_id','service_start','service_end','service_id'])
print df_active
cust_id service_start service_end service_id
0 A 2016-01-10 2016-03-15 1
1 A 2016-09-10 2016-11-15 2
2 B 2016-01-02 2016-03-17 3
</code></pre>
<p>I need to find the service_id each call belongs to, identified by the service_start and service_end dates. If a call does not fall between any of the dates, it should still remain in the dataset.</p>
<p>Here's what I tried so far:</p>
<pre><code>df_test_output = pd.merge(df_calls,df_active, how = 'left',on = ['cust_id'])
df_test_output = df_test_output[(df_test_output['call_date']>= df_test_output['service_start'])
& (df_test_output['call_date']<= df_test_output['service_end'])].drop(['service_start','service_end'],axis = 1)
print df_test_output
cust_id call_date call_id service_id
0 A 2016-02-03 1 1
5 A 2016-10-01 3 2
7 A 2016-11-02 4 2
8 B 2016-01-10 5 3
</code></pre>
<p>This drops all the calls that were not between service dates. Any thoughts on how I can merge on the service_id where it meets the criteria, but retain the remaining records? </p>
<p>The result should look like this:</p>
<pre><code>#do black magic
print df_calls
cust_id call_date call_id service_id
0 A 2016-02-03 1 1.0
1 A 2016-05-11 2 NaN
2 A 2016-10-01 3 2.0
3 A 2016-11-02 4 2.0
4 B 2016-01-10 5 3.0
5 B 2016-04-25 6 NaN
</code></pre> | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with left join:</p>
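<p>(Note added here, not part of the original answer: <code>df_calls2</code> below is presumably the matched subset already built in the question, i.e. the calls that fall inside a service window together with their <code>service_id</code>. A minimal sketch of constructing it under that assumption:)</p>
<pre><code>matched = pd.merge(df_calls, df_active, on='cust_id', how='left')
in_window = matched['call_date'].between(matched['service_start'], matched['service_end'])
df_calls2 = matched.loc[in_window, ['cust_id', 'call_date', 'call_id', 'service_id']]
</code></pre>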
<pre><code>print (pd.merge(df_calls, df_calls2, how='left'))
cust_id call_date call_id service_id
0 A 2016-02-03 1 1.0
1 A 2016-05-11 2 NaN
2 A 2016-10-01 3 2.0
3 A 2016-11-02 4 2.0
4 B 2016-01-10 5 3.0
5 B 2016-04-25 6 NaN
</code></pre> | python|pandas | 3 |
95 | 40,471,883 | Split column and format the column values | <p>I am trying to format one column data. I can find options to split the columns as it has <code>,</code> in between but I am not able to format it as shown in output. </p>
<p>Input </p>
<pre><code> TITLE,Issn
NATURE REVIEWS MOLECULAR CELL BIOLOGY,"ISSN 14710072, 14710080"
ANNUAL REVIEW OF IMMUNOLOGY,"ISSN 07320582, 15453278"
NATURE REVIEWS GENETICS,"ISSN 14710056, 14710064"
CA - A CANCER JOURNAL FOR CLINICIANS,"ISSN 15424863, 00079235"
CELL,"ISSN 00928674, 10974172"
ANNUAL REVIEW OF ASTRONOMY AND ASTROPHYSICS,"ISSN 15454282, 00664146"
NATURE REVIEWS IMMUNOLOGY,"ISSN 14741741, 14741733"
NATURE REVIEWS CANCER,ISSN 1474175X
ANNUAL REVIEW OF BIOCHEMISTRY,"ISSN 15454509, 00664154"
REVIEWS OF MODERN PHYSICS,"ISSN 00346861, 15390756"
NATURE GENETICS,ISSN 10614036
</code></pre>
<ol>
<li>Split the issn column to two columns as it has <code>,</code></li>
<li>Delete the word ISSN from column only</li>
<li>leave behind numbers After 4 digits put a <code>-</code></li>
</ol>
<p>Expected output is </p>
<pre><code> TITLE,Issn
NATURE REVIEWS MOLECULAR CELL BIOLOGY,1471-0072, 1471-0080
ANNUAL REVIEW OF IMMUNOLOGY,0732-0582, 1545-3278
NATURE REVIEWS GENETICS,1471-0056, 1471-0064
CA - A CANCER JOURNAL FOR CLINICIANS,1542-4863, 0007-9235
CELL,0092-8674, 1097-4172
ANNUAL REVIEW OF ASTRONOMY AND ASTROPHYSICS,1545-4282, 0066-4146
NATURE REVIEWS IMMUNOLOGY,1474-1741, 1474-1733
NATURE REVIEWS CANCER, 1474-175X
ANNUAL REVIEW OF BIOCHEMISTRY,1545-4509, 0066-4154
REVIEWS OF MODERN PHYSICS,0034-6861, 1539-0756
NATURE GENETICS,1061-4036
</code></pre>
<p>Any suggestion with pandas are appreciated .. Thanks in advance </p>
<p><strong>Update:</strong><br>
When trying to run both of the programs mentioned in the answer:</p>
<pre><code>import pandas as pd
import re
df = pd.read_csv('new_journal_list.csv', header='TITLE,Issn')
'''
df_split_num = df['Issn'].map(lambda x: x.split('ISSN ')[1].split(', '))
df_dash_num = df_split_num.map(lambda x: [num[:4] + '-' + num[4:] for num in x])
df_split_issn = pd.DataFrame(data=list(df_dash_num), columns=['Issn1', 'Issn2'])
df[['Issn1', 'Issn2']] = df_split_issn
del df['Issn']
print df
'''
df[['Issn1','Issn2']] = (df.pop('Issn').str.extract('ISSN\s+([^,]+),?\s?(.*)', expand=True)
.apply(lambda x: x.str[:4]+'-'+x.str[4:]).replace(r'^-$', '', regex=True))
print df
</code></pre>
<p>In either case, when run in the default Python 2.7, I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "clean_journal_list.py", line 1, in <module>
import pandas as pd
File "/usr/local/lib/python2.7/dist-packages/pandas/__init__.py", line 25, in <module>
from pandas import hashtable, tslib, lib
File "pandas/src/numpy.pxd", line 157, in init pandas.hashtable (pandas/hashtable.c:38364)
</code></pre>
<p>When run in Python 3.4, the error below is seen:</p>
<pre><code>File "clean_journal_list.py", line 21
print df
^
SyntaxError: invalid syntax
</code></pre> | <p>IIUC you can do it using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer">Series.str.extract()</a>, <code>apply()</code> and <code>replace()</code> methods:</p>
<pre><code>In [33]: df
Out[33]:
TITLE Issn
0 NATURE REVIEWS MOLECULAR CELL BIOLOGY ISSN 14710072, 14710080
1 ANNUAL REVIEW OF IMMUNOLOGY ISSN 07320582, 15453278
2 NATURE REVIEWS GENETICS ISSN 14710056, 14710064
3 CA - A CANCER JOURNAL FOR CLINICIANS ISSN 15424863, 00079235
4 CELL ISSN 00928674, 10974172
5 ANNUAL REVIEW OF ASTRONOMY AND ASTROPHYSICS ISSN 15454282, 00664146
6 NATURE REVIEWS IMMUNOLOGY ISSN 14741741, 14741733
7 NATURE REVIEWS CANCER ISSN 1474175X
8 ANNUAL REVIEW OF BIOCHEMISTRY ISSN 15454509, 00664154
9 REVIEWS OF MODERN PHYSICS ISSN 00346861, 15390756
10 NATURE GENETICS ISSN 10614036
In [34]: df[['Issn1','Issn2']] = (df.pop('Issn')
...: .str.extract('ISSN\s+([^,]+),?\s?(.*)', expand=True)
...: .apply(lambda x: x.str[:4]+'-'+x.str[4:])
...: .replace(r'^-$', '', regex=True))
...:
In [35]: df
Out[35]:
TITLE Issn1 Issn2
0 NATURE REVIEWS MOLECULAR CELL BIOLOGY 1471-0072 1471-0080
1 ANNUAL REVIEW OF IMMUNOLOGY 0732-0582 1545-3278
2 NATURE REVIEWS GENETICS 1471-0056 1471-0064
3 CA - A CANCER JOURNAL FOR CLINICIANS 1542-4863 0007-9235
4 CELL 0092-8674 1097-4172
5 ANNUAL REVIEW OF ASTRONOMY AND ASTROPHYSICS 1545-4282 0066-4146
6 NATURE REVIEWS IMMUNOLOGY 1474-1741 1474-1733
7 NATURE REVIEWS CANCER 1474-175X
8 ANNUAL REVIEW OF BIOCHEMISTRY 1545-4509 0066-4154
9 REVIEWS OF MODERN PHYSICS 0034-6861 1539-0756
10 NATURE GENETICS 1061-4036
</code></pre> | python|csv|pandas|dataframe|data-cleaning | 2 |
96 | 61,879,043 | Remove NaN values from pandas dataframes inside a list | <p>I have a number of dataframes inside a list. I am trying to remove NaN values. I tried to do it in a for loop:</p>
<pre><code>for i in list_of_dataframes:
i.dropna()
</code></pre>
<p>It didn't work, but Python didn't return an error either. If I apply the code</p>
<pre><code>list_of_dataframes[0] = list_of_dataframes[0].dropna()
</code></pre>
<p>to each dataframe individually, it works, but I have too many of them. There must be a way that I just can't figure out. What are the possible solutions?</p>
<p>Thanks a lot</p> | <p>You didn't assign the new DataFrames with the dropped values to anything, so there was no effect.</p>
<p>Try this:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(len(list_of_dataframes)):
list_of_dataframes[i] = list_of_dataframes[i].dropna()
</code></pre>
<p>Or, more conveniently:</p>
<pre class="lang-py prettyprint-override"><code>for df in list_of_dataframes:
df.dropna(inplace=True)
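
# Added note (not from the original answer): rebuilding the list with a
# comprehension works just as well, without mutating in place:
# list_of_dataframes = [df.dropna() for df in list_of_dataframes]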
</code></pre> | pandas|nested-lists|nested-datalist | 0 |
97 | 61,727,806 | Concatenate alternate scalar column to pandas based on condition | <p>I have a <code>master</code> dataframe and a <code>tag</code> list, as follows:</p>
<pre><code>import pandas as pd
i = ['A'] * 2 + ['B'] * 3 + ['A'] * 4 + ['B'] * 5
master = pd.DataFrame(i, columns={'cat'})
tag = [0, 1]
</code></pre>
<p>How to insert a column of tags that is normal for cat: A, but reversed for cat: B? Expected output is:</p>
<pre><code> cat tags
0 A 0
1 A 1
2 B 1
3 B 0
4 B 1
5 A 0
6 A 1
7 A 0
8 A 1
9 B 1
10 B 0
...
</code></pre> | <p>EDIT: Because is necessary processing each concsecutive group separately I try create general solution:</p>
<pre><code>tag = ['a','b','c']
r = range(len(tag))
r1 = range(len(tag)-1, -1, -1)
print (dict(zip(r1, tag)))
{2: 'a', 1: 'b', 0: 'c'}
m1 = master['cat'].eq('A')
m2 = master['cat'].eq('B')
s = master['cat'].ne(master['cat'].shift()).cumsum()
master['tags'] = master.groupby(s).cumcount() % len(tag)
master.loc[m1, 'tags'] = master.loc[m1, 'tags'].map(dict(zip(r, tag)))
master.loc[m2, 'tags'] = master.loc[m2, 'tags'].map(dict(zip(r1, tag)))
print (master)
cat tags
0 A a
1 A b
2 B c
3 B b
4 B a
5 A a
6 A b
7 A c
8 A a
9 B c
10 B b
11 B a
12 B c
13 B b
</code></pre>
<p>Another approach is to create a <code>DataFrame</code> from the tags and <code>merge</code> with a left join:</p>
<pre><code>tag = ['a','b','c']
s = master['cat'].ne(master['cat'].shift()).cumsum()
master['g'] = master.groupby(s).cumcount() % len(tag)
d = {'A': tag, 'B':tag[::-1]}
df = pd.DataFrame([(k,i,x)
for k, v in d.items()
for i, x in enumerate(v)], columns=['cat','g','tags'])
print (df)
cat g tags
0 A 0 a
1 A 1 b
2 A 2 c
3 B 0 c
4 B 1 b
5 B 2 a
</code></pre>
<hr>
<pre><code>master = master.merge(df, on=['cat','g'], how='left').drop('g', axis=1)
print (master)
cat tags
0 A a
1 A b
2 B c
3 B b
4 B a
5 A a
6 A b
7 A c
8 A a
9 B c
10 B b
11 B a
12 B c
13 B b
</code></pre>
<p>The idea is to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.tile.html?highlight=tile#numpy.tile" rel="nofollow noreferrer"><code>numpy.tile</code></a> to repeat the <code>tag</code> values to the number of matched rows (via integer division), and then filter by indexing and assign using both masks:</p>
<pre><code>le = len(tag)
m1 = master['cat'].eq('A')
m2 = master['cat'].eq('B')
s1 = m1.sum()
s2 = m2.sum()
master.loc[m1, 'tags'] = np.tile(tag, s1 // le + le)[:s1]
#swapped order for m2 mask
master.loc[m2, 'tags'] = np.tile(tag[::-1], s2// le + le)[:s2]
print (master)
cat tags
0 A 0.0
1 A 1.0
2 B 1.0
3 B 0.0
4 B 1.0
5 A 0.0
6 A 1.0
7 A 0.0
8 A 1.0
</code></pre> | pandas | 2 |
98 | 58,123,825 | Input to reshape is a tensor with 'batch_size' values, but the requested shape requires a multiple of 'n_features' | <p>I'm trying to make my own attention model and I found example code here:
<a href="https://www.kaggle.com/takuok/bidirectional-lstm-and-attention-lb-0-043" rel="nofollow noreferrer">https://www.kaggle.com/takuok/bidirectional-lstm-and-attention-lb-0-043</a></p>
<p>and it works just fine when I run it without modification.</p>
<p>But my own data contains only numeric values, so I had to change the example code.</p>
<p>So I removed the embedding part from the example code; in addition, this is what I changed:</p>
<pre class="lang-py prettyprint-override"><code>xtr = np.reshape(xtr, (xtr.shape[0], 1, xtr.shape[1]))
# xtr.shape() = (n_sample_train, 1, 150), y.shape() = (n_sample_train, 6)
xte = np.reshape(xte, (xte.shape[0], 1, xte.shape[1]))
# xte.shape() = (n_sample_test, 1, 150)
model = BidLstm(maxlen, max_features)
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
</code></pre>
<p>and my BidLstm func looks like,</p>
<pre class="lang-py prettyprint-override"><code>
def BidLstm(maxlen, max_features):
inp = Input(shape=(1,150))
#x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)(inp) -> I don't need embedding since my own data is numeric.
x = Bidirectional(LSTM(300, return_sequences=True, dropout=0.25,
recurrent_dropout=0.25))(inp)
x = Attention(maxlen)(x)
x = Dense(256, activation="relu")(x)
x = Dropout(0.25)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
return model
</code></pre>
<p>and it said,</p>
<pre class="lang-py prettyprint-override"><code>InvalidArgumentErrorTraceback (most recent call last)
<ipython-input-62-929955370368> in <module>
29
30 early = EarlyStopping(monitor="val_loss", mode="min", patience=1)
---> 31 model.fit(xtr, y, batch_size=128, epochs=15, validation_split=0.1, callbacks=[early])
32 #model.fit(xtr, y, batch_size=256, epochs=1, validation_split=0.1)
33
/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1037 initial_epoch=initial_epoch,
1038 steps_per_epoch=steps_per_epoch,
-> 1039 validation_steps=validation_steps)
1040
1041 def evaluate(self, x=None, y=None,
/usr/local/lib/python3.5/dist-packages/keras/engine/training_arrays.py in fit_loop(model, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
197 ins_batch[i] = ins_batch[i].toarray()
198
--> 199 outs = f(ins_batch)
200 outs = to_list(outs)
201 for l, o in zip(out_labels, outs):
/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
2713 return self._legacy_call(inputs)
2714
-> 2715 return self._call(inputs)
2716 else:
2717 if py_any(is_tensor(x) for x in inputs):
/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
2673 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
2674 else:
-> 2675 fetched = self._callable_fn(*array_vals)
2676 return fetched[:len(self.outputs)]
2677
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1437 ret = tf_session.TF_SessionRunCallable(
1438 self._session._session, self._handle, args, status,
-> 1439 run_metadata_ptr)
1440 if run_metadata:
1441 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
526 None, None,
527 compat.as_text(c_api.TF_Message(self.status.status)),
--> 528 c_api.TF_GetCode(self.status.status))
529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to
InvalidArgumentError: Input to reshape is a tensor with 128 values, but the requested shape requires a multiple of 150
[[{{node attention_16/Reshape_2}}]]
[[{{node loss_5/mul}}]]
</code></pre>
<p>I think something is wrong in the loss function, as described here:
<a href="https://stackoverflow.com/questions/42115585/input-to-reshape-is-a-tensor-with-2-batch-size-values-but-the-requested-sha">Input to reshape is a tensor with 2 * "batch_size" values, but the requested shape has "batch_size"</a></p>
<p>but I don't know which part to fix.</p>
<p>my keras and tensorflow versions are 2.2.4 and 1.13.0-rc0</p>
<p>please help. thanks.</p>
<p><strong>Edit 1</strong></p>
<p>I've changed my batch size to a multiple of 150 (batch_size = 150), as Keras asks. Then it reports:</p>
<pre class="lang-py prettyprint-override"><code>Train on 143613 samples, validate on 15958 samples
Epoch 1/15
143400/143613 [============================>.] - ETA: 0s - loss: 0.1505 - acc: 0.9619
InvalidArgumentError: Input to reshape is a tensor with 63 values, but the requested shape requires a multiple of 150
[[{{node attention_18/Reshape_2}}]]
[[{{node metrics_6/acc/Mean_1}}]]
</code></pre>
<p>and the details are the same as before. What should I do?</p> | <p>Your input shape must be <code>(150,1)</code>.</p>
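<p>A minimal sketch of that change (my addition, not part of the original answer; it assumes <code>xtr</code>/<code>xte</code> start out as 2-D arrays of shape <code>(n_samples, 150)</code>, so the 150 features become 150 timesteps of one feature each):</p>
<pre><code>xtr = np.reshape(xtr, (xtr.shape[0], xtr.shape[1], 1))  # (n_samples, 150, 1)
xte = np.reshape(xte, (xte.shape[0], xte.shape[1], 1))
inp = Input(shape=(150, 1))  # steps=150, features=1
</code></pre>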
<p>LSTM shapes are <code>(batch, steps, features)</code>. It's pointless to use LSTMs with 1 step only. (Unless you are using custom training loops with <code>stateful=True</code>, which is not your case). </p> | python-3.x|numpy|tensorflow|keras | 1 |
99 | 57,998,473 | np.vectorize for TypeError: only size-1 arrays can be converted to Python scalars | <p>I tried to vectorize as per previous questions, but it still doesn't work.</p>
<pre><code>import numpy as np
import math
S0 = 50
k_list = np.linspace(S0 * 0.6, S0 * 1.4, 50)
K=k_list
d1 = np.vectorize(math.log(S0 / K))
print(d1)
</code></pre> | <pre><code>In [141]: import math
...:
...:
...: S0 = 50
...:
...: k_list = np.linspace(S0 * 0.6, S0 * 1.4, 50)
...:
...: K=k_list
...:
...: d1 = np.vectorize(math.log(S0 / K))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-141-5d2a0f276bc5> in <module>
8 K=k_list
9
---> 10 d1 = np.vectorize(math.log(S0 / K))
TypeError: only size-1 arrays can be converted to Python scalars
</code></pre>
<p>The <code>np.vectorized</code> argument is not a function, and in fact produces the error:</p>
<pre><code>In [142]: math.log(S0 / K)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-142-dedf1ab558ff> in <module>
----> 1 math.log(S0 / K)
TypeError: only size-1 arrays can be converted to Python scalars
</code></pre>
<p>The <code>log</code> argument is an array. <code>math.log</code> only works for 1 number, not an array:</p>
<pre><code>In [143]: S0 / K
Out[143]:
array([1.66666667, 1.62251656, 1.58064516, 1.5408805 , 1.50306748,
1.46706587, 1.43274854, 1.4 , 1.36871508, 1.33879781,
1.31016043, 1.28272251, 1.25641026, 1.23115578, 1.20689655,
1.18357488, 1.16113744, 1.13953488, 1.11872146, 1.09865471,
1.07929515, 1.06060606, 1.04255319, 1.0251046 , 1.00823045,
0.99190283, 0.97609562, 0.96078431, 0.94594595, 0.93155894,
0.917603 , 0.90405904, 0.89090909, 0.8781362 , 0.86572438,
0.85365854, 0.8419244 , 0.83050847, 0.81939799, 0.80858086,
0.7980456 , 0.78778135, 0.77777778, 0.76802508, 0.75851393,
0.74923547, 0.74018127, 0.73134328, 0.72271386, 0.71428571])
</code></pre>
<p><code>np.log</code> does work with an array input:</p>
<pre><code>In [145]: np.log(S0 / K)
Out[145]:
array([ 0.51082562, 0.48397837, 0.45783309, 0.43235401, 0.40750801,
0.3832644 , 0.35959465, 0.33647224, 0.3138724 , 0.29177206,
0.27014959, 0.24898478, 0.22825865, 0.20795339, 0.18805223,
0.16853942, 0.14940008, 0.13062018, 0.11218648, 0.09408644,
0.07630819, 0.0588405 , 0.0416727 , 0.02479466, 0.00819677,
-0.00813013, -0.02419473, -0.04000533, -0.05556985, -0.07089582,
-0.08599045, -0.10086061, -0.11551289, -0.12995357, -0.14418869,
-0.15822401, -0.17206506, -0.18571715, -0.19918536, -0.21247459,
-0.22558954, -0.2385347 , -0.25131443, -0.26393289, -0.27639411,
-0.28870196, -0.30086016, -0.31287232, -0.3247419 , -0.33647224])
</code></pre>
<p>The correct way to use <code>vectorize</code> (if there is such a thing :) ), is:</p>
<pre><code>In [146]: d1 = np.vectorize(lambda k: math.log(S0 / k))
In [147]: d1(K)
Out[147]:
array([ 0.51082562, 0.48397837, 0.45783309, 0.43235401, 0.40750801,
0.3832644 , 0.35959465, 0.33647224, 0.3138724 , 0.29177206,
0.27014959, 0.24898478, 0.22825865, 0.20795339, 0.18805223,
0.16853942, 0.14940008, 0.13062018, 0.11218648, 0.09408644,
0.07630819, 0.0588405 , 0.0416727 , 0.02479466, 0.00819677,
-0.00813013, -0.02419473, -0.04000533, -0.05556985, -0.07089582,
-0.08599045, -0.10086061, -0.11551289, -0.12995357, -0.14418869,
-0.15822401, -0.17206506, -0.18571715, -0.19918536, -0.21247459,
-0.22558954, -0.2385347 , -0.25131443, -0.26393289, -0.27639411,
-0.28870196, -0.30086016, -0.31287232, -0.3247419 , -0.33647224])
</code></pre> | python|python-3.x|numpy | 1 |