Unnamed: 0: int64, values 0 to 378k
id: int64, values 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, values -10 to 5.87k
100
58,107,632
Speed up search of array element in second array
<p>I have a pretty simple operation involving two not so large arrays:</p> <ol> <li>For every element in the first (larger) array, located in position <code>i</code></li> <li>Find if it exists in the second (smaller) array</li> <li>If it does, find its index in the second array: <code>j</code></li> <li>Store a float taken from a third array (same length as first array) in the position <code>i</code>, in the position <code>j</code> of a fourth array (same length as second array)</li> </ol> <p>The <code>for</code> block below works, but gets <strong>very</strong> slow for not so large arrays (>10000).</p> <p>Can this implementation be made faster?</p> <pre><code>import numpy as np import random ############################################## # Generate some random data. #'Nb' is always smaller then 'Na Na, Nb = 50000, 40000 # List of IDs (could be any string, I use integers here for simplicity) ids_a = random.sample(range(1, Na * 10), Na) ids_a = [str(_) for _ in ids_a] random.shuffle(ids_a) # Some floats associated to these IDs vals_in_a = np.random.uniform(0., 1., Na) # Smaller list of repeated IDs from 'ids_a' ids_b = random.sample(ids_a, Nb) # Array to be filled vals_in_b = np.zeros(Nb) ############################################## # This block needs to be *a lot* more efficient # # For each string in 'ids_a' for i, id_a in enumerate(ids_a): # if it exists in 'ids_b' if id_a in ids_b: # find where in 'ids_b' this element is located j = ids_b.index(id_a) # store in that position the value taken from 'ids_a' vals_in_b[j] = vals_in_a[i] </code></pre>
<p>In defense of my approach, here is the authoritative implementation:</p> <pre><code>import itertools as it def pp(): la,lb = len(ids_a),len(ids_b) ids = np.fromiter(it.chain(ids_a,ids_b),'&lt;S6',la+lb) unq,inv = np.unique(ids,return_inverse=True) vals = np.empty(la,vals_in_a.dtype) vals[inv[:la]] = vals_in_a return vals[inv[la:]] (juanpa()==pp()).all() # True timeit(juanpa,number=100) # 3.1373191522434354 timeit(pp,number=100) # 2.5256317732855678 </code></pre> <p>That said, @juanpa.arrivillaga's suggestion can also be implemented better:</p> <pre><code>import operator as op def ja(): return op.itemgetter(*ids_b)(dict(zip(ids_a,vals_in_a))) (ja()==pp()).all() # True timeit(ja,number=100) # 2.015202699229121 </code></pre>
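<p>The benchmark above calls <code>timeit</code> and a <code>juanpa()</code> baseline that are not defined in this excerpt; the following is only a guessed reconstruction of that missing scaffolding (the function body is an assumption, not the original suggestion verbatim), so the timings can be reproduced:</p> <pre><code>from timeit import timeit
import numpy as np

def juanpa():
    # hypothetical reconstruction of the dict-based lookup being benchmarked
    lookup = dict(zip(ids_a, vals_in_a))
    return np.array([lookup[i] for i in ids_b])
</code></pre>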
python|arrays|performance|numpy
1
101
34,157,574
Complex integration in Python
<p>There is a MATLAB function <a href="http://uk.mathworks.com/help/matlab/ref/quadgk.html" rel="nofollow noreferrer"><code>quadgk</code></a> that can compute complex integrals, or at least functions with poles and singularities. In Python, there is a general-purpose <code>scipy.integrate.quad</code> which is handy for integration along the real axis. Is there a Python equivalent to MATLAB's <code>quadgk</code>? All I could find was <a href="https://stackoverflow.com/questions/5965583/use-scipy-integrate-quad-to-integrate-complex-numbers">dr jimbob's code</a> in another SO question, which doesn't seem to work on Python 3.4 any more.</p>
<p>I don't think SciPy does provide an equivalent of MATLAB's <code>quadgk</code>, but for what it's worth the code you link to in <a href="https://stackoverflow.com/questions/5965583/use-scipy-integrate-quad-to-integrate-complex-numbers">this question</a> can be made to work in Python 3 with only minor changes:</p> <pre><code>import scipy from scipy import array def quad_routine(func, a, b, x_list, w_list): c_1 = (b-a)/2.0 c_2 = (b+a)/2.0 eval_points = map(lambda x: c_1*x+c_2, x_list) func_evals = list(map(func, eval_points)) # Python 3: make a list here return c_1 * sum(array(func_evals) * array(w_list)) def quad_gauss_7(func, a, b): x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0, 0.405845151377397, 0.741531185599394, 0.949107912342759] w_gauss = array([0.129484966168870, 0.279705391489277, 0.381830050505119, 0.417959183673469, 0.381830050505119, 0.279705391489277,0.129484966168870]) return quad_routine(func,a,b,x_gauss, w_gauss) def quad_kronrod_15(func, a, b): x_kr = [-0.991455371120813,-0.949107912342759, -0.864864423359769, -0.741531185599394, -0.586087235467691,-0.405845151377397, -0.207784955007898, 0.0, 0.207784955007898,0.405845151377397, 0.586087235467691, 0.741531185599394, 0.864864423359769, 0.949107912342759, 0.991455371120813] w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250, 0.140653259715525, 0.169004726639267, 0.190350578064785, 0.204432940075298, 0.209482141084728, 0.204432940075298, 0.190350578064785, 0.169004726639267, 0.140653259715525, 0.104790010322250, 0.063092092629979, 0.022935322010529] return quad_routine(func,a,b,x_kr, w_kr) class Memoize: # Python 3: no need to inherit from object def __init__(self, func): self.func = func self.eval_points = {} def __call__(self, *args): if args not in self.eval_points: self.eval_points[args] = self.func(*args) return self.eval_points[args] def quad(func,a,b): ''' Output is the 15 point estimate; and the estimated error ''' func = Memoize(func) # Memoize function to skip repeated function calls. g7 = quad_gauss_7(func,a,b) k15 = quad_kronrod_15(func,a,b) # I don't have much faith in this error estimate taken from wikipedia # without incorporating how it should scale with changing limits return [k15, (200*scipy.absolute(g7-k15))**1.5] </code></pre> <p>For example,</p> <pre><code>print(quad(lambda x: scipy.exp(1j*x), 0,scipy.pi/2.0)) [(0.99999999999999711+0.99999999999999689j), 9.6120083407040365e-19] </code></pre>
python|matlab|function|numpy|scipy
1
102
34,326,939
OpenCV createCalibrateDebevec.process is giving me "dst is not a numpy array, neither a scalar"
<p>I'm currently trying to do some HDR processing with OpenCV's python wrapper.</p> <pre><code>import cv2 import numpy as np img = cv2.imread("1.jpg") img2 = cv2.imread("2.jpg") img3 = cv2.imread("3.jpg") images = [img, img2, img3] times = [-2, 0, 2] response = np.zeros(256) import ipdb; ipdb.set_trace() calibrate = cv2.createCalibrateDebevec() calibrate.process(images, response, times) ipdb&gt; calibrate.process(images, response, times) *** TypeError: dst is not a numpy array, neither a scalar </code></pre> <p>It says that dst or 'response' in my code based on the position is not an numpy array but checking the type of 'response', it clearly says it is.</p> <pre><code>ipdb&gt; type(response) &lt;type 'numpy.ndarray'&gt; </code></pre>
<p>You should call</p> <pre><code>calibrate.process(images, times, response) </code></pre> <p>or</p> <pre><code>response = calibrate.process(images, times) </code></pre> <p>instead of</p> <pre><code>calibrate.process(images, response, times) </code></pre> <p>because the Python <code>CalibrateDebevec</code>'s <code>process</code> member has the following signature:</p> <pre><code>process(src, times[, dst]) -&gt; dst </code></pre> <p>It can be easily checked with the following:</p> <pre><code>import inspect print(inspect.getdoc(calibrate.process)) </code></pre>
python|c++|arrays|opencv|numpy
1
103
37,140,223
How to sort data frame by column values?
<p>I am relatively new to python and pandas data frames so maybe I have missed something very easy here. So I was having data frame with many rows and columns but at the end finally manage to get only one row with maximum value from each column. I used this code to do that:</p> <pre><code>import pandas as pd d = {'A' : [1.2, 2, 4, 6], 'B' : [2, 8, 10, 12], 'C' : [5, 3, 4, 5], 'D' : [3.5, 9, 1, 11], 'E' : [5, 8, 7.5, 3], 'F' : [8.8, 4, 3, 2]} df = pd.DataFrame(d, index=['a', 'b', 'c', 'd']) print df Out: A B C D E F a 1.2 2 5 3.5 5.0 8.8 b 2.0 8 3 9.0 8.0 4.0 c 4.0 10 4 1.0 7.5 3.0 d 6.0 12 5 11.0 3.0 2.0 </code></pre> <p>Then to choose max value from each column I used this function:</p> <pre><code>def sorted(s, num): tmp = s.order(ascending=False)[:num] tmp.index = range(num) return tmp NewDF=df.apply(lambda x: sorted(x, 1)) print NewDF Out: A B C D E F 0 6.0 12 5 11.0 8.0 8.8 </code></pre> <p>Yes, I lost row labels (indexes whatever) but this column labels are more important for me to retain. Now I just need to sort columns I need top 5 columns based on values inside them, I need this output:</p> <pre><code>Out: B D F E A 0 12.0 11 8.8 8.0 6.0 </code></pre> <p>I was looking for a solution but with no luck. The best I found for sorting by columns is print NewDF.sort(axis=1) but nothing happens.</p> <p>Edit: Ok, I found one way but with transformation:</p> <pre><code>transposed = NewDF.T print(transposed.sort([0], ascending=False)) </code></pre> <p>Is this the only possible way to do it?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html"><code>max</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html"><code>nlargest</code></a>, because <code>nlargest</code> sorts output:</p> <pre><code>print df.max().nlargest(5) B 12.0 D 11.0 F 8.8 E 8.0 A 6.0 dtype: float64 </code></pre> <p>And then convert to <code>DataFrame</code>:</p> <pre><code>print pd.DataFrame(df.max().nlargest(5)).T B D F E A 0 12.0 11.0 8.8 8.0 6.0 </code></pre> <p>EDIT:</p> <p>If you need sort one row <code>DataFrame</code>:</p> <pre><code>print NewDF.T.sort_values(0, ascending=False) 0 B 12.0 D 11.0 F 8.8 E 8.0 A 6.0 C 5.0 </code></pre> <p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html"><code>apply</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sort_values.html"><code>sort_values</code></a>:</p> <pre><code>print NewDF.apply(lambda x: x.sort_values(ascending=False), axis=1) B D F E A C 0 12.0 11.0 8.8 8.0 6.0 5.0 </code></pre>
python-2.7|pandas|dataframe
6
104
36,897,366
pandas to_html using the .style options or custom CSS?
<p>I was following <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="noreferrer">the style guide for pandas</a> and it worked pretty well. </p> <p>How can I keep these styles using the to_html command through Outlook? The documentation seems a bit lacking for me.</p> <pre><code>(df.style .format(percent) .applymap(color_negative_red, subset=['col1', 'col2']) .set_properties(**{'font-size': '9pt', 'font-family': 'Calibri'}) .bar(subset=['col4', 'col5'], color='lightblue')) import win32com.client as win32 outlook = win32.Dispatch('outlook.application') mail = outlook.CreateItem(0) mail.Subject = subject_name mail.HTMLbody = ('&lt;html&gt;&lt;body&gt;&lt;p&gt;&lt;body style="font-size:11pt; font-family:Calibri"&gt;Hello,&lt;/p&gt; + '&lt;p&gt;Title of Data&lt;/p&gt;' + df.to_html( index=False, classes=????????) '&lt;/body&gt;&lt;/html&gt;') mail.send </code></pre> <p>The to_html documentation shows that there is a classes command that I can put inside of the to_html method, but I can't figure it out. It also seems like my dataframe does not carry the style that I specified up top. </p> <p>If I try:</p> <pre><code> df = (df.style .format(percent) .applymap(color_negative_red, subset=['col1', 'col2']) .set_properties(**{'font-size': '9pt', 'font-family': 'Calibri'}) .bar(subset=['col4', 'col5'], color='lightblue')) </code></pre> <p>Then df is now a Style object and you can't use to_html.</p> <p>Edit - this is what I am currently doing to modify my tables. This works, but I can't keep the cool features of the .style method that pandas offers.</p> <pre><code>email_paragraph = """ &lt;body style= "font-size:11pt; font-family:Calibri; text-align:left; margin: 0px auto" &gt; """ email_caption = """ &lt;body style= "font-size:10pt; font-family:Century Gothic; text-align:center; margin: 0px auto" &gt; """ email_style = '''&lt;style type="text/css" media="screen" style="width:100%"&gt; table, th, td {border: 0px solid black; background-color: #eee; padding: 10px;} th {background-color: #C6E2FF; color:black; font-family: Tahoma;font-size : 13; text-align: center;} td {background-color: #fff; padding: 10px; font-family: Calibri; font-size : 12; text-align: center;} &lt;/style&gt;''' </code></pre>
<p>Once you add <code>style</code> to your chained assignments you are operating on a <code>Styler</code> object. That object has a <code>render</code> method to get the html as a string. So in your example, you could do something like this:</p> <pre><code>html = ( df.style .format(percent) .applymap(color_negative_red, subset=['col1', 'col2']) .set_properties(**{'font-size': '9pt', 'font-family': 'Calibri'}) .bar(subset=['col4', 'col5'], color='lightblue') .render() ) </code></pre> <p>Then include the <code>html</code> in your email instead of a <code>df.to_html()</code>.</p>
python|pandas|pandas-styles
45
105
54,962,758
Rotating a Geoplot polyplot
<p>I currently have the function:</p> <pre><code>def cross_country(contiguous_usa, full_geo_data): full_geo_data['Coordinates'] = full_geo_data[['longitude', 'latitude']].values.tolist() full_geo_data['Coordinates'] = full_geo_data['Coordinates'].apply(Point) full_geo_data = gpd.GeoDataFrame(full_geo_data, geometry='Coordinates') fig = plt.figure(figsize=(10,15)) ax1 = plt.subplot(212, projection=gcrs.AlbersEqualArea(central_latitude=-98, central_longitude=39.5)) gplt.kdeplot(full_geo_data[full_geo_data['speedkmh'] == 0], projection=gcrs.AlbersEqualArea(), cmap="cool", clip=contiguous_usa.geometry, ax=ax1) gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea(), ax=ax1) plt.title("Test") cross_country(contiguous_usa, full_geo_data) </code></pre> <p>It works fine however, when I run it the map comes out as such:</p> <p><a href="https://i.stack.imgur.com/4uRbx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4uRbx.png" alt="The orientation is not nominal"></a></p> <p>I know its a trivial thing to ask, but I've looked into the documentation and I can't find anything that relates to changing basic orientation, for literally just rotating the plot.</p>
<p>The easiest answer likely involves the projection or crs you are using. However, if you can't get that to work, you can use shapely to modify individual rows of the geodataframe.</p> <pre><code>import shapely.affinity def rotator(row): row['geometry'] = shapely.affinity.rotate(row['geometry'], -90) return row full_geo_data = full_geo_data.apply(rotator, axis = 1) </code></pre>
python|pandas|geopandas
0
106
54,814,133
Catch numpy ComplexWarning as Exception
<p>Consider the following example:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.array([1.0, 2.1j]) &gt;&gt;&gt; b = np.array(a, dtype=np.float64) /Users/goerz/anaconda3/bin/ipython:1: ComplexWarning: Casting complex values to real discards the imaginary part #!/Users/goerz/anaconda3/bin/python </code></pre> <p>How can I catch the ComplexWarning as an Exception?</p> <p>I have tried <code>np.seterr</code>, but this has no effect (as it only relates to floating point warnings such as underflows/overflows).</p> <p>I've also tried the <code>with warnings.catch_warnings():</code> context manager from the standard library, but it also has no effect.</p>
<p>Using <a href="https://docs.python.org/3/library/warnings.html#the-warnings-filter" rel="nofollow noreferrer">stdlib warnings filter</a> causes these to raise instead of print:</p> <pre><code>&gt;&gt;&gt; warnings.filterwarnings(action="error", category=np.ComplexWarning) &gt;&gt;&gt; b = np.array(a, dtype=np.float64) ComplexWarning: Casting complex values to real discards the imaginary part </code></pre> <p>You can reset it to default filters with <a href="https://docs.python.org/3/library/warnings.html#warnings.resetwarnings" rel="nofollow noreferrer"><code>warnings.resetwarnings</code></a>.</p>
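<p>A minimal sketch of catching it as an exception only within a limited scope (assuming the warning class is exposed as <code>np.ComplexWarning</code>, as in the snippet above), so the global filter state is left untouched:</p> <pre><code>import warnings
import numpy as np

a = np.array([1.0, 2.1j])
with warnings.catch_warnings():
    # inside this block the warning is raised as an exception
    warnings.simplefilter("error", np.ComplexWarning)
    try:
        b = np.array(a, dtype=np.float64)
    except np.ComplexWarning:
        b = np.abs(a)  # fall back to magnitudes, just as an example
</code></pre>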
python|numpy
4
107
73,416,293
Python Protocol for Building a `pandas.DataFrame`
<p>Hello SO and community!</p> <p>Guess, my question somewhat resonates with <a href="https://stackoverflow.com/questions/72798903/is-there-a-way-to-specify-a-protocol-for-a-pandas-dataframe">this one</a>.</p> <p>However, trust the below task is a little bit different from that referenced above, namely to extract, transform, load data utilizing <code>pandas.DataFrame</code>, and I am stuck implementing <code>Protocol</code> for the purpose.</p> <p>The code is below:</p> <pre><code>import io import pandas as pd import re import requests from functools import cache from typing import Protocol from zipfile import ZipFile from pandas import DataFrame @cache def extract_can_from_url(url: str, **kwargs) -&gt; DataFrame: ''' Returns DataFrame from downloaded zip file from url Parameters ---------- url : str url to download from. **kwargs : TYPE additional arguments to pass to pd.read_csv(). Returns ------- DataFrame ''' name = url.split('/')[-1] if os.path.exists(name): with ZipFile(name, 'r').open(name.replace('-eng.zip', '.csv')) as f: return pd.read_csv(f, **kwargs) else: r = requests.get(url) with ZipFile(io.BytesIO(r.content)).open(name.replace('-eng.zip', '.csv')) as f: return pd.read_csv(f, **kwargs) class ETL(Protocol): # ============================================================================= # Maybe Using these items for dataclass: # url: str # meta: kwargs(default_factory=dict) # ============================================================================= def __init__(self, url: str, **kwargs) -&gt; None: return None def download(self) -&gt; DataFrame: return DataFrame def retrieve_series_ids(self) -&gt; list[str]: return list[str] def transform(self) -&gt; DataFrame: return DataFrame def sum_up_series_ids(self) -&gt; DataFrame: return DataFrame class ETLCanadaFixedAssets(ETL): def __init__(self, url: str, **kwargs) -&gt; None: self.url = url self.kwargs = kwargs @cache def download(self) -&gt; DataFrame: self.df = extract_can_from_url(URL, index_col=0, usecols=range(14)) return self.df def retrieve_series_ids(self) -&gt; list[str]: # ========================================================================= # Columns Specific to URL below, might be altered # ========================================================================= self._columns = { &quot;Prices&quot;: 0, &quot;Industry&quot;: 1, &quot;Flows and stocks&quot;: 2, &quot;VECTOR&quot;: 3, } self.df_cut = self.df.loc[:, tuple(self._columns)] _q = (self.df_cut.iloc[:, 0].str.contains('2012 constant prices')) &amp; \ (self.df_cut.iloc[:, 1].str.contains('manufacturing', flags=re.IGNORECASE)) &amp; \ (self.df_cut.iloc[:, 2] == 'Linear end-year net stock') self.df_cut = self.df_cut[_q] self.series_ids = sorted(set(self.df_cut.iloc[:, -1])) return self.series_ids def transform(self) -&gt; DataFrame: # ========================================================================= # Columns Specific to URL below, might be altered # ========================================================================= self._columns = { &quot;VECTOR&quot;: 0, &quot;VALUE&quot;: 1, } self.df = self.df.loc[:, tuple(self._columns)] self.df = self.df[self.df.iloc[:, 0].isin(self.series_ids)] return self.df def sum_up_series_ids(self) -&gt; DataFrame: self.df = pd.concat( [ self.df[self.df.iloc[:, 0] == series_id].iloc[:, [1]] for series_id in self.series_ids ], axis=1 ) self.df.columns = self.series_ids self.df['sum'] = self.df.sum(axis=1) return self.df.iloc[:, [-1]] </code></pre> <p><em><strong>UPD</strong></em></p> <p>Instantiating the class 
<code>ETLCanadaFixedAssets</code></p> <pre><code>df = ETLCanadaFixedAssets(URL, index_col=0, usecols=range(14)).download().retrieve_series_ids().transform().sum_up_series_ids() </code></pre> <p>returns an error, however, expected:</p> <pre><code>AttributeError: 'DataFrame' object has no attribute 'retrieve_series_ids' </code></pre> <p>Please can anyone provide a guidance for how to put these things together (namely how to retrieve the <code>DataFrame</code> which might have been retrieved otherwise using the procedural approach by calling the functions within the last <code>class</code> as they appear within the latter) and point at those mistakes which were made above?</p> <p>Probably, there is another way to do this elegantly using injection.</p> <p>Thank you very much in advance!</p>
<p>All the functions of ETLCanadaFixedAssets and ETL classes should return self. This will allow you to call the functions of the class on the return value of the functions, so you can chain them together. You could add one more function that retrieves the encapsulated dataframe but that will always be called last, as the moment you call this function you cannot chain other functions any more. What you are trying to build is called fluent API you may read more about it <a href="https://java-design-patterns.com/patterns/fluentinterface/#:%7E:text=Fluent%20Interface%20pattern%20provides%20easily%20readable%20flowing%20interface%20to%20code.&amp;text=In%20software%20engineering%2C%20a%20fluent,%2Dspecific%20language%20(DSL)." rel="nofollow noreferrer">here</a></p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>class ETL(Protocol): def download(self) -&gt; ETL: ... def retrieve_series_ids(self) -&gt; ETL: ... def transform(self) -&gt; ETL: ... def sum_up_series_ids(self) -&gt; ETL: ... @property def dataframe(self) -&gt; DataFrame: ... </code></pre> <p>Note you will need the following import line to be able to use the class annotation inside the class definition</p> <p><code>from __future__ import annotations</code></p>
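<p>A minimal sketch of how the concrete class from the question might look once each step returns <code>self</code> (the method bodies are assumed to stay exactly as in the question; only the return values change):</p> <pre><code>class ETLCanadaFixedAssets(ETL):
    def __init__(self, url: str, **kwargs) -&gt; None:
        self.url = url
        self.kwargs = kwargs

    def download(self) -&gt; "ETLCanadaFixedAssets":
        # using self.url instead of the global URL from the question
        self.df = extract_can_from_url(self.url, **self.kwargs)
        return self  # return the instance, not the DataFrame, so calls can chain

    # retrieve_series_ids, transform and sum_up_series_ids keep their bodies
    # from the question, store their results on self.df, and end with `return self`

    @property
    def dataframe(self) -&gt; DataFrame:
        return self.df  # called last, to get the result out of the chain


df = (ETLCanadaFixedAssets(URL, index_col=0, usecols=range(14))
      .download()
      .retrieve_series_ids()
      .transform()
      .sum_up_series_ids()
      .dataframe)
</code></pre>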
python|pandas|dataframe|protocols
1
108
35,261,581
Change format for data imported from file in Python
<p>My data file is Tab separated and looks like this:</p> <pre><code>196 242 3 881250949 186 302 3 891717742 22 377 1 878887116 244 51 2 880606923 166 346 1 886397596 298 474 4 884182806 115 265 2 881171488 253 465 5 891628467 305 451 3 886324817 ... ... .. ......... </code></pre> <p>I imported them in Python using <code>numpy</code>, here is my script:</p> <pre><code>from numpy import loadtxt np_data = loadtxt('u.data', delimiter='\t', skiprows=0) print(np_data) </code></pre> <p>I just want to print it to see the result, but it gives me different a format:</p> <pre><code>[[ 1.96000000e+02 2.42000000e+02 3.00000000e+00 8.81250949e+08] [ 1.86000000e+02 3.02000000e+02 3.00000000e+00 8.91717742e+08] [ 2.20000000e+01 3.77000000e+02 1.00000000e+00 8.78887116e+08] ..., [ 2.76000000e+02 1.09000000e+03 1.00000000e+00 8.74795795e+08] [ 1.30000000e+01 2.25000000e+02 2.00000000e+00 8.82399156e+08] [ 1.20000000e+01 2.03000000e+02 3.00000000e+00 8.79959583e+08]] </code></pre> <p>There is point <code>.</code> in every number in <code>print(np_data)</code>. How to format them to look like my original data file?</p>
<p>I've solved this: it turns out I missed the <code>dtype</code> argument, so the script should look like this:</p> <pre><code>from numpy import loadtxt np_data = loadtxt('u.data', dtype=int, delimiter='\t', skiprows=0) print(np_data) </code></pre> <p>and done</p>
python|python-2.7|numpy
1
109
34,932,739
Python: numpy shape confusion
<p>I have a numpy array:</p> <pre><code>&gt;&gt;&gt; type(myArray1) Out[14]: numpy.ndarray &gt;&gt;&gt; myArray1.shape Out[13]: (500,) </code></pre> <p>I have another array:</p> <pre><code>&gt;&gt;&gt; type(myArray2) Out[14]: numpy.ndarray &gt;&gt;&gt; myArray2.shape Out[13]: (500,1) </code></pre> <p>( 1 ) What is the difference between (500,) and (500,1) ?</p> <p>( 2 ) How do I change (500,) to (500,1)</p>
<p>(1) The difference between (500,) and (500,1) is that the first is the shape of a one-dimensional array, while the second is the shape of a 2-dimensional array whose 2nd dimension has length 1. This may be confusing at first since other languages don't make that distinction.</p> <p>(2) You can use np.reshape to do that: <code>myArray1.reshape(-1,1)</code>. You can also add a dimension to your array using np.expand_dims: <code>np.expand_dims(myArray1, axis = 1)</code>.</p>
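<p>A small illustration of both points (a quick check in any interpreter should show the same shapes):</p> <pre><code>import numpy as np

a = np.zeros(500)               # shape (500,)  -- 1-dimensional
b = a.reshape(-1, 1)            # shape (500, 1) -- 2-dimensional column
c = np.expand_dims(a, axis=1)   # shape (500, 1) as well
print(a.shape, b.shape, c.shape)
</code></pre>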
python|numpy
6
110
35,097,837
Capture video data from screen in Python
<p>Is there a way with Python (maybe with OpenCV or PIL) to continuously grab frames of all or a portion of the screen, at least at 15 fps or more? I've seen it done in other languages, so in theory it should be possible. </p> <p>I do not need to save the image data to a file. I actually just want it to output an array containing the raw RGB data (like in a numpy array or something) since I'm going to just take it and send it to a large LED display (probably after re-sizing it).</p>
<p>With all of the above solutions, I was unable to get a usable frame rate until I modified my code in the following way:</p> <pre><code>import numpy as np import cv2 from mss import mss from PIL import Image bounding_box = {'top': 100, 'left': 0, 'width': 400, 'height': 300} sct = mss() while True: sct_img = sct.grab(bounding_box) cv2.imshow('screen', np.array(sct_img)) if (cv2.waitKey(1) &amp; 0xFF) == ord('q'): cv2.destroyAllWindows() break </code></pre> <p>With this solution, I easily get 20+ frames/second.</p> <p>For reference, check this link: <a href="https://python-mss.readthedocs.io/examples.html#opencv-numpy" rel="noreferrer">OpenCV/Numpy example with mss</a></p>
python|opencv|numpy|screenshot
32
111
31,072,305
Replace a value in MultiIndex (pandas)
<p>In the following DataFrame: How can I replace <code>["x2", "Total"]</code> with <code>["x2", "x2"]</code> leaving <code>x1</code> as is?</p> <pre><code>l1 900 902 912 913 916 l2 ИП ПС ИП ПС ИП ПС ИП ПС ИП ПС i1 i2 x1 Total 10 6 3 3 10 16 2 9 3 8 x2 Total 1 0 0 0 0 0 0 0 0 0 </code></pre> <p><code>.rename</code> will replace all <code>"Total"</code> values, not just the one I need.</p>
<p>Assuming your dataframe is called df the following code will perform your desired substitution by replacing the existing index with a modified index.</p> <pre><code>index = df.index names = index.names index = df.index.tolist()[:1]+[('x2','x2')] df.index = pd.MultiIndex.from_tuples(index, names = names) </code></pre> <p>Or you can directly modify the inner level of the index:</p> <pre><code>df.index.set_levels([u'Total', u'x2'],level=1,inplace=True) df.index.set_labels([0, 1],level=1,inplace=True) </code></pre> <p>You can also use <code>level='i2'</code> in place of <code>level=1</code></p>
python|pandas|dataframe|multi-index
8
112
67,217,354
How to access Artifacts for a Model Endpoint on Unified AI Platform when using Custom Containers for Prediction?
<p>Because of certain VPC restrictions I am forced to use custom containers for predictions for a model trained on Tensorflow. According to the documentation requirements I have created a HTTP server using Tensorflow Serving. The Dockerfile used to build the image is as follows:</p> <pre><code>FROM tensorflow/serving:2.3.0-gpu # Set where models should be stored in the container ENV MODEL_BASE_PATH=/models RUN mkdir -p ${MODEL_BASE_PATH} # copy the model file ENV MODEL_NAME=my_model COPY my_model /models/my_model EXPOSE 5000 EXPOSE 8080 CMD [&quot;tensorflow_model_server&quot;, &quot;--rest_api_port=8080&quot;, &quot;--port=5000&quot;, &quot;--model_name=my_model&quot;, &quot;--model_base_path=/models/my_model&quot;] </code></pre> <p>Where <code>my_model</code> contains the <code>saved_model</code> inside a folder named <code>1/</code>. I have then pushed the container image to Google Container Registry.</p> <p>I would now like to pass a <code>Model Artifact</code> to this custom container such that I don't have to <code>build</code> and <code>push</code> a new docker image every time I train a new model. However I am unable to figure out how to access this new model (which is saved on a Cloud Storage Bucket) from within my <code>Dockerfile</code> while creating a <code>Model</code> on Unified AI Platform.</p> <p>According to the documentation mentioned <a href="https://cloud.google.com/ai-platform-unified/docs/predictions/custom-container-requirements#artifacts" rel="nofollow noreferrer">here</a> the way to do so is as follows:</p> <blockquote> <p>However, if you do provide model artifacts by specifying the <code>artifactUri</code> field, then the container must load these artifacts when it starts running. When AI Platform starts your container, it sets the <code>AIP_STORAGE_URI</code> environment variable to a Cloud Storage URI that begins with <code>gs://</code>. Your container's entrypoint command can download the directory specified by this URI in order to access the model artifacts.</p> </blockquote> <p>However how do I rewrite the <code>ENTRYPOINT</code> to my Docker Image such that it reads the <code>AIP_STORAGE_URI</code> variable?</p> <p>The link to the base image <code>tensorflow/serving:2.3.0-gpu</code> is <a href="https://github.com/tensorflow/serving/blob/master/tensorflow_serving/tools/docker/Dockerfile.gpu" rel="nofollow noreferrer">here</a>.</p> <p>Any help will be appreciated.</p>
<p>In the <a href="https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#artifacts" rel="nofollow noreferrer">documentation</a>, it is explained that Vertex AI creates and manages a copy of the model artifacts that are passed when creating a model. The URI of the model artifact bucket managed by Vertex AI is stored in the <code>AIP_STORAGE_URI</code> environment variable. To access the model artifacts from this URI, the <code>ENTRYPOINT</code> command can be written as follows</p> <pre><code>ENTRYPOINT tensorflow_model_server --rest_api_port=8080 --port=5000 --model_name=$MODEL_NAME --model_base_path=$AIP_STORAGE_URI </code></pre> <p>The shell form of the <code>ENTRYPOINT</code> command is used here because there is a need to access the <code>AIP_STORAGE_URI</code> environment variable. More information about the usage of the <code>ENTRYPOINT</code> command can be found <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer">here</a>.</p>
docker|tensorflow|google-cloud-platform|google-cloud-ml
1
113
67,221,457
How come I get "ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)." with text data?
<p>Using tensorflow, I am trying to convert a dataframe to a tds so that I can do some NLP work with it. It is all text data.</p> <pre><code>&gt;&gt;&gt; df.dtypes title object headline object byline object dateline object text object copyright category country category industry category topic category file object dtype: object </code></pre> <p>This is labeled data, where <code>df.topics, df.country, df.industry</code> are labels for <code>df.text</code>. I am trying to build a model to predict the topic of the text, given this labeled dataset and using BERT. Before I get to to that, however, I am converting <code>df</code> into a tds.</p> <pre><code>import pandas as pd import numpy as np import tensorflow as tf from transformers import AutoTokenizer import tensorflow_hub as hub import tensorflow_text as text from sklearn.model_selection import train_test_split # Load tokenizer and logger tf.get_logger().setLevel('ERROR') tokenizer = AutoTokenizer.from_pretrained('roberta-large') # Load dataframe with just text and topic columns df = pd.read_csv('test_dataset.csv', sep='|', dtype={'topic': 'category', 'country': 'category', 'industry': 'category', 'copyright': 'category'}) # Split dataset into train, test, val (70, 15, 15) train, test = train_test_split(df, test_size=0.15) train, val = train_test_split(train, test_size=0.15) # Convert df to tds train_ds = tf.data.Dataset.from_tensor_slices(dict(train)) val_ds = tf.data.Dataset.from_tensor_slices(dict(val)) test_ds = tf.data.Dataset.from_tensor_slices(dict(test)) for feature_batch, label_batch in train_ds.take(1): print('Every feature:', list(feature_batch.keys())) print('A batch of topics:', feature_batch['topic']) print('A batch of targets:', label_batch ) </code></pre> <p>I am getting <code>ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).</code> on <code>line 23, in &lt;module&gt; train_ds = tf.data.Dataset.from_tensor_slices(dict(train))</code>.</p> <p>How can this be the case when this is all text data? How do I fix this?</p> <p>I have looked at <a href="https://stackoverflow.com/questions/58636087/tensorflow-valueerror-failed-to-convert-a-numpy-array-to-a-tensor-unsupporte">Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)</a> but it does not solve my problem.</p>
<p>If you are using <code>tf.data.Dataset.from_tensor_slices</code> you have to first convert your data into a numpy array. Also, since this is using text data, you also need to tokenize your data.</p> <pre><code># Create new index train_idx = [i for i in range(len(train.index))] test_idx = [i for i in range(len(test.index))] val_idx = [i for i in range(len(val.index))] # Convert to numpy x_train = train['text'].values[train_idx] x_test = test['text'].values[test_idx] x_val = val['text'].values[val_idx] y_train = train['topic_encoded'].values[train_idx] y_test = test['topic_encoded'].values[test_idx] y_val = val['topic_encoded'].values[val_idx] # Tokenize datasets tr_tok = tokenizer(list(x_train), return_tensors='tf', truncation=True, padding=True, max_length=128) val_tok = tokenizer(list(x_val), return_tensors='tf', truncation=True, padding=True, max_length=128) test_tok = tokenizer(list(x_test), return_tensors='tf', truncation=True, padding=True, max_length=128) # Convert dfs to tds train_ds = tf.data.Dataset.from_tensor_slices((dict(tr_tok), y_train)) val_ds = tf.data.Dataset.from_tensor_slices((dict(val_tok), y_val)) test_ds = tf.data.Dataset.from_tensor_slices((dict(test_tok), y_test)) </code></pre> <p>That should solve the problem.</p>
python|pandas|numpy|tensorflow
0
114
67,442,037
Subset permutation of one column based on the category of another column in pandas
<p>Here is one simplified case:</p> <pre><code>df = pd.DataFrame({'col1': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'col2': ['a', 'b', 'c', 'A', 'C', 'B', 'red', 'blue', 'greed']}) </code></pre> <p>I want to do subset permutation on col2 with reference to col1. For example, only permute 'a', 'b', 'c' in col2 for they belong to category 1 in col1. And permute 'A', 'C', 'B' in category 2, then those colors in category 3. The output looks like below:</p> <pre><code>col1 col2 0 1 b 1 1 c 2 1 a 3 2 A 4 2 B 5 2 C 6 3 blue 7 3 red 8 3 green </code></pre> <p>For there are thousands of categories in col1, I'm thinking if there is a simple way instead of doing it in a loop one by one. Thank you.</p>
<pre><code>df['col2'] = df.groupby('col1', as_index=False).col2.transform(np.random.permutation) df </code></pre> <p><strong>Output</strong></p> <pre><code> col1 col2 0 1 c 1 1 b 2 1 a 3 2 B 4 2 A 5 2 C 6 3 red 7 3 greed 8 3 blue </code></pre>
pandas|subset|permutation
1
115
67,282,736
How to skip rows with wrong data types in a dataset using Python
<p>I have been working on cleaning a dataset and processing the data for further analysis, and I have used different cleaning scripts.</p> <p>My script gets aborted whenever unwanted or unexpected data comes up in between the dataset columns: the script execution gets stuck and the rest of the data doesn't get processed.</p> <p><strong>Script I have tried using:</strong></p> <pre><code>import pandas as pd import numpy as np pd.options.mode.chained_assignment = None df = pd.read_excel(open(r'data.xlsx', 'rb'), sheet_name='sheet1') </code></pre> <p><strong>What I have been expecting:</strong></p> <p>How can I process the whole dataset even if unexpected or unknown data types come up in between the data, skipping them and leaving the wrong data types as they are?</p> <p>Is there any exception handling method I can use for this?</p> <p>Please suggest.</p>
<p>I don't think I quite understand the problem.</p> <p>I have always just done it this way and never had problems:</p> <pre><code>import pandas as pd FileLocation = (r'Test.xlsx') df = pd.read_excel(FileLocation, sheet_name='sheet1') print(df.head()) </code></pre> <p>and then you can use a loop to iterate over your data frame if you want to go and remove wrong data.</p> <p>But if you have problems with the Excel reader reading a column as float when you want it as a string, you can do this:</p> <pre><code>import pandas as pd FileLocation = (r'Test.xlsx') df = pd.read_excel(FileLocation, sheet_name='sheet1', converters={'COLUMN-NAME':str}) print(df.head()) </code></pre> <p>Then you would get the wanted column as a string, or whatever type you want.</p>
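<p>If the goal is really to skip the offending rows rather than read everything as strings, one common pattern (not shown above, and using a made-up column name <code>'amount'</code> purely for illustration) is to coerce the column and drop whatever fails to convert:</p> <pre><code>import pandas as pd

df = pd.read_excel(r'Test.xlsx', sheet_name='sheet1')
# values that cannot be parsed as numbers become NaN instead of raising
df['amount'] = pd.to_numeric(df['amount'], errors='coerce')
df = df.dropna(subset=['amount'])
</code></pre>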
python|pandas|dataframe
0
116
60,189,328
Arbitrary number of different groupby levels in one go
<p>Is there a way to compute arbitrary number of different groupby levels in one go with some pre-built Pandas function? Below is a simple example with two columns.</p> <pre><code>import pandas as pd df1 = pd.DataFrame( { "name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"], "city" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"], "dollars":[1, 1, 1, 1, 1, 1] }) group1 = df1.groupby("city").dollars.sum().reset_index() group1['name']='All' group2 = df1.groupby("name").dollars.sum().reset_index() group2['city']='All' group3 = df1.groupby(["name", "city"]).dollars.sum().reset_index() total = df1.dollars.sum() total_df=pd.DataFrame({ "name" : ["All"], "city" : ["All"], "dollars": [total] }) all_groups = group3.append([group1, group2, total_df], sort=False) name city dollars 0 Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 3 Mallory Seattle 1 0 All Portland 2 1 All Seattle 4 0 Alice All 1 1 Bob All 2 2 Mallory All 3 0 All All 6 </code></pre> <p>So I took Ben. T example and rebuilt it from sum() to agg(). The next step for me is to build an option to pass a specific list of groupby combinations, in case not all of them are needed.</p> <pre><code>from itertools import combinations import pandas as pd df1 = pd.DataFrame( { "name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"], "city" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"], "dollars":[1, 2, 6, 5, 3, 4], "qty":[2, 3, 4, 1, 5, 6] , "id":[1, 1, 2, 2, 3, 3] }) col_gr = ['name', 'city'] agg_func={'dollars': ['sum', 'max', 'count'], 'qty': ['sum'], "id":['nunique']} def multi_groupby(in_df, col_gr, agg_func, all_value="ALL"): tmp1 = pd.DataFrame({**{col: all_value for col in col_gr}}, index=[0]) tmp2 = in_df.agg(agg_func)\ .unstack()\ .to_frame()\ .transpose()\ .dropna(axis=1) tmp2.columns = ['_'.join(col).strip() for col in tmp2.columns.values] total = tmp1.join(tmp2) for r in range(len(col_gr), 0, -1): for cols in combinations(col_gr, r): tmp_grp = in_df.groupby(by=list(cols))\ .agg(agg_func)\ .reset_index()\ .assign(**{col: all_value for col in col_gr if col not in cols}) tmp_grp.columns = ['_'.join(col).rstrip('_') for col in tmp_grp.columns.values] total = pd.concat([total]+[tmp_grp], axis=0, ignore_index=True) return total multi_groupby(df1, col_gr, agg_func) </code></pre>
<p>Assuming you look for a general way to create all the combinations in the <code>groupby</code>, you can use <a href="https://docs.python.org/3.8/library/itertools.html#itertools.combinations" rel="nofollow noreferrer">itertools.combinations</a>:</p> <pre><code>from itertools import combinations col_gr = ['name', 'city'] col_sum = ['dollars'] all_groups = pd.concat( [ df1.groupby(by=list(cols))[col_sum].sum().reset_index()\ .assign(**{col:'all' for col in col_gr if col not in cols}) for r in range(len(col_gr), 0, -1) for cols in combinations(col_gr, r) ] + [ pd.DataFrame({**{col:'all' for col in col_gr}, **{col: df1[col].sum() for col in col_sum},}, index=[0])], axis=0, ignore_index=True) print (all_groups) name city dollars 0 Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 3 Mallory Seattle 1 4 Alice all 1 5 Bob all 2 6 Mallory all 3 7 all Portland 2 8 all Seattle 4 9 all all 6 </code></pre>
python|pandas|pandas-groupby
1
117
60,197,294
Error when using pandas dataframe in R cell, in rpy2, Jupyter Notebook
<p>I want to use <code>ggplot2</code> within <code>Jupyter Notebook</code>. However, when I try to make an R magic cell and introduce a variable, I get an error.</p> <p>Here is the code (one paragraph indicates one cell):</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import rpy2 %matplotlib inline from rpy2.robjects import pandas2ri pandas2ri.activate() %load_ext rpy2.ipython %%R library(ggplot2) data = pd.read_csv('train_titanic.csv') %%R -i data -w 900 -h 480 -u px </code></pre> <p>With this last cell, I get the following error (incl traceback):</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/pandas2ri.py in py2rpy_pandasdataframe(obj) 54 try: ---&gt; 55 od[name] = conversion.py2rpy(values) 56 except Exception as e: ~/anaconda3/envs/catenv/lib/python3.7/functools.py in wrapper(*args, **kw) 839 --&gt; 840 return dispatch(args[0].__class__)(*args, **kw) 841 ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/pandas2ri.py in py2rpy_pandasseries(obj) 125 if type(x) is not homogeneous_type: --&gt; 126 raise ValueError('Series can only be of one type, or None.') 127 # TODO: Could this be merged with obj.type.name == 'O' case above ? ValueError: Series can only be of one type, or None. During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in from_object(cls, obj) 367 try: --&gt; 368 mv = memoryview(obj) 369 res = cls.from_memoryview(mv) TypeError: memoryview: a bytes-like object is required, not 'Series' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) &lt;ipython-input-14-75e210679e4a&gt; in &lt;module&gt; ----&gt; 1 get_ipython().run_cell_magic('R', '-i data -w 900 -h 480 -u px', '\n\n') ~/anaconda3/envs/catenv/lib/python3.7/site-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell) 2360 with self.builtin_trap: 2361 args = (magic_arg_s, cell) -&gt; 2362 result = fn(*args, **kwargs) 2363 return result 2364 &lt;/home/morgan/anaconda3/envs/catenv/lib/python3.7/site-packages/decorator.py:decorator-gen-130&gt; in R(self, line, cell, local_ns) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/IPython/core/magic.py in &lt;lambda&gt;(f, *a, **k) 185 # but it's overkill for just that one bit of state. 
186 def magic_deco(arg): --&gt; 187 call = lambda f, *a, **k: f(*a, **k) 188 189 if callable(arg): ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/ipython/rmagic.py in R(self, line, cell, local_ns) 721 raise NameError("name '%s' is not defined" % input) 722 with localconverter(converter) as cv: --&gt; 723 ro.r.assign(input, val) 724 725 tmpd = self.setup_graphics(args) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/functions.py in __call__(self, *args, **kwargs) 190 kwargs[r_k] = v 191 return (super(SignatureTranslatedFunction, self) --&gt; 192 .__call__(*args, **kwargs)) 193 194 ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/functions.py in __call__(self, *args, **kwargs) 111 112 def __call__(self, *args, **kwargs): --&gt; 113 new_args = [conversion.py2rpy(a) for a in args] 114 new_kwargs = {} 115 for k, v in kwargs.items(): ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/functions.py in &lt;listcomp&gt;(.0) 111 112 def __call__(self, *args, **kwargs): --&gt; 113 new_args = [conversion.py2rpy(a) for a in args] 114 new_kwargs = {} 115 for k, v in kwargs.items(): ~/anaconda3/envs/catenv/lib/python3.7/functools.py in wrapper(*args, **kw) 838 '1 positional argument') 839 --&gt; 840 return dispatch(args[0].__class__)(*args, **kw) 841 842 funcname = getattr(func, '__name__', 'singledispatch function') ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/pandas2ri.py in py2rpy_pandasdataframe(obj) 59 'The error is: %s' 60 % (name, str(e))) ---&gt; 61 od[name] = StrVector(values) 62 63 return DataFrame(od) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/robjects/vectors.py in __init__(self, obj) 382 383 def __init__(self, obj): --&gt; 384 super().__init__(obj) 385 self._add_rops() 386 ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in __init__(self, obj) 286 super().__init__(obj) 287 elif isinstance(obj, collections.abc.Sized): --&gt; 288 super().__init__(type(self).from_object(obj).__sexp__) 289 else: 290 raise TypeError('The constructor must be called ' ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in from_object(cls, obj) 370 except (TypeError, ValueError): 371 try: --&gt; 372 res = cls.from_iterable(obj) 373 except ValueError: 374 msg = ('The class methods from_memoryview() and ' ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/conversion.py in _(*args, **kwargs) 26 def _cdata_res_to_rinterface(function): 27 def _(*args, **kwargs): ---&gt; 28 cdata = function(*args, **kwargs) 29 # TODO: test cdata is of the expected CType 30 return _cdata_to_rinterface(cdata) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in from_iterable(cls, iterable, populate_func) 317 if populate_func is None: 318 cls._populate_r_vector(iterable, --&gt; 319 r_vector) 320 else: 321 populate_func(iterable, r_vector) ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in _populate_r_vector(cls, iterable, r_vector) 300 r_vector, 301 cls._R_SET_VECTOR_ELT, --&gt; 302 cls._CAST_IN) 303 304 @classmethod ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in _populate_r_vector(iterable, r_vector, set_elt, cast_value) 237 def _populate_r_vector(iterable, r_vector, set_elt, cast_value): 238 for i, v in enumerate(iterable): --&gt; 239 set_elt(r_vector, i, cast_value(v)) 240 241 ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/sexp.py in 
_as_charsxp_cdata(x) 430 return x.__sexp__._cdata 431 else: --&gt; 432 return conversion._str_to_charsxp(x) 433 434 ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/conversion.py in _str_to_charsxp(val) 118 s = rlib.R_NaString 119 else: --&gt; 120 cchar = _str_to_cchar(val) 121 s = rlib.Rf_mkCharCE(cchar, _CE_UTF8) 122 return s ~/anaconda3/envs/catenv/lib/python3.7/site-packages/rpy2/rinterface_lib/conversion.py in _str_to_cchar(s, encoding) 97 def _str_to_cchar(s, encoding: str = 'utf-8'): 98 # TODO: use isStrinb and installTrChar ---&gt; 99 b = s.encode(encoding) 100 return ffi.new('char[]', b) 101 AttributeError: 'float' object has no attribute 'encode' </code></pre> <p>So I find that it is not possible to even start an R magic cell while importing my pandas dataframe object. However, I have tried creating R vectors inside the cell, and find I can plot these using <code>ggplot2</code> with no issues.</p> <p>I am using <code>Python 3.7.6</code>, <code>rpy2 3.1.0</code>, <code>jupyter-notebook 6.0.3</code>and am using <code>Ubuntu 18.04.2 LTS</code> on Windows Subsystem for Linux.</p>
<p>The problem is most likely with one (or more) columns having more than one type - therefore it is impossible to transfer the data into an R vector (which can hold only one data type). The traceback may be overwhelming, but here is the relevant part:</p> <pre><code>ValueError: Series can only be of one type, or None. </code></pre> <p>Which column it is? Difficult to say without looking at the dataset that you load, but my general solution is to check the types in the columns:</p> <pre><code>types = data.applymap(type).apply(set) types[types.apply(len) &gt; 1] </code></pre> <p>Anything returned by the snippet above would be a candidate culprit. There are many different ways of dealing with the problem, depending on the exact nature of the data. Workarounds that I frequently use include:</p> <ul> <li>calling <code>data = data.infer_objects()</code> - helps if the pandas did not catch up with a dtype change and still stores the data with (suboptimal) Python objects</li> <li>filling <code>NaN</code> with an empty string or a string constant if you have missing values in a string column (e.g. <code>str_columns = str_columns.fillna('')</code>)</li> <li><code>dates.apply(pd.to_datetime, axis=1)</code> if you have <code>datetime</code> objects but the dtype is object</li> <li>using <code>df.applymap(lambda x: datetime.combine(x, datetime.min.time()) if not isinstance(x, datetime) else x)</code> if you have a mixture of <code>date</code> and <code>datetime</code> objects</li> </ul> <p>In some vary rare cases pandas stores the data differently than expected by rpy2 (following certain manipulations); then writing the dataframe down to a csv file and reading it from the disk again helps - but this is likely not what you are facing here, as you start from a newly read dataframe. </p>
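<p>A short sketch tying the diagnostic and the workarounds together (the exact fix naturally depends on which columns turn out to be mixed):</p> <pre><code>types = data.applymap(type).apply(set)
mixed_cols = types[types.apply(len) &gt; 1].index

for col in mixed_cols:
    if data[col].dtype == object:
        # force a single type, e.g. strings with '' for missing values
        data[col] = data[col].fillna('').astype(str)

data = data.infer_objects()
</code></pre>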
python|r|pandas|jupyter-notebook|rpy2
4
118
60,308,484
How do I find the mean of summed values across multiple Dataframes?
<p>Edit: I've realized that I did not ask my question in the right way. I'm not going to accept one answer over another, but am going to leave all content here for anyone's future use.</p> <p>I have a value that I'm looking to compute across on-going DataFrames.</p> <p>I have df1:</p> <pre><code>Name | Col1 | Col2 ---------------------------- 'Silvers'| 7 | 1 'Jones' | 7 | 2 'Jackson'| 4 | NaN 'Merole' | NaN | 2 'Kanoff' | NaN | 5 'Walker' | NaN | 8 'Smith' | 8 | 0 </code></pre> <p>I'd like to sum the <code>Col1</code> and <code>Col2</code> columns that results in a new column, <code>Col3</code>. I already have a solution that sums correctly if there is a value present in both columns, returns the non-null value if there is a NaN value present, and returns NaN if both values are NaN. So that resulting DataFrame, <code>df2</code> would look like this:</p> <pre><code>Name | Col1 | Col2 | Col_Sum ------------------------------------- 'Silvers'| 7 | 1 | 8 'Jones' | 7 | 2 | 9 'Jackson'| 4 | NaN | 4 'Merole' | NaN | 2 | 2 'Kanoff' | NaN | NaN | NaN 'Walker' | NaN | 8 | 8 'Smith' | NaN | NaN | NaN </code></pre> <p>For <code>df3</code>, when I have new data, I'd like to perform the same sum operations as I did above, but then find the average of <em>only</em> the non-null values that were included in the summation in prior DataFrames.</p> <p>I want this for <code>df3</code>:</p> <pre><code>Name | Col1 | Col2 | Col_Sum | Cols_Avg ------------------------------------------------------- 'Silvers'| 3 | 3 | 6 | 3.5 'Jones' | 1 | 6 | 7 | 4 'Jackson'| NaN | 9 | 9 | 6.5 'Merole' | NaN | NaN | NaN | 2 'Kanoff' | NaN | 7 | 7 | 7 'Walker' | 4 | 8 | 12 | 6.67 'Smith' | NaN | NaN | NaN | NaN </code></pre> <p>I'd then like to continue that trend with each new DataFrame of data that I get, summing the values and computing their averages based on how many values there are for the same row across all the Dataframes. I'm not sure how to accomplish this or if I'm even using the correct tools to do so. Any help is much appreciated. Thanks! </p>
<p>Probably you can use <code>pd.merge</code>:</p> <pre><code>df2 = pd.DataFrame({'Name':['Silvers', 'Jones', 'Jackson', 'Merole'], 'Col1':[7,7,4,np.nan], 'Col2':[1,2,np.nan,2]}) df3 = pd.DataFrame({'Name':['Silvers', 'Jones', 'Jackson', 'Merole'], 'Col1':[3,1,np.nan,np.nan], 'Col2':[3,6,9,np.nan]}) dfn = pd.merge(df2, df3, on='Name') dfn['ColsAvg'] = dfn.mean(axis=1) </code></pre> <p>For ease I put everything in a new DataFrame <code>dfn</code>. If you want to put <code>ColsAvg</code> in df3 as you write, sure this is possible.</p>
python|pandas|dataframe
0
119
65,346,131
What's the best way of converting a numeric array in a text file to a numpy array?
<p>So I'm trying to create an array from a text file, the text file is laid out as follows. The numbers in the first two columns both go to 165:</p> <pre><code>0 0 1.0 0.0 1 0 0.0 0.0 1 1 0.0 0.0 2 0 -9.0933087157900000E-5 0.0000000000000000E+00 2 1 -2.7220323615900000E-09 -7.5751829208300000E-10 2 2 3.4709851601400000E-5 1.6729490538300000E-08 3 0 -3.2035914003000000E-06 0.0000000000000000E+00 3 1 2.6327440121800000E-05 5.4643630898200000E-06 3 2 1.4188179329400000E-05 4.8920365004800000E-06 3 3 1.2286058944700000E-05 -1.7854480816400000E-06 4 0 3.1973095717200000E-06 0.0000000000000000E+00 4 1 -5.9966018301500000E-06 1.6619345194700000E-06 4 2 -7.0818069269700000E-06 -6.7836271726900000E-06 4 3 -1.3622983381300000E-06 -1.3443472287100000E-05 4 4 -6.0257787358300000E-06 3.9396371953800000E-06 </code></pre> <p>I'm trying to write a function where an array is made using the numbers in the 3rd columns, taking their positions in the array from the first two columns, and the empty cells are 0s. For example:</p> <pre><code>1 0 0 0 0 0 0 0 -9.09330871579000e-05 -2.72203236159000e-09 3.47098516014000e-05 0 -3.20359140030000e-06 2.63274401218000e-05 1.41881793294000e-05 1.22860589447000e-05 </code></pre> <p>At the same time, I'm also trying to make a second array but using the numbers from the 4th column not the 3rd. The code that I've written so far is as follows and this is the array produced, I'm not even sure where the 4.41278e-08 comes from:</p> <pre><code>import numpy as np def createarray(filepath,maxdegree): Cnm = np.zeros((maxdegree+1,maxdegree+1)) Snm = np.zeros((maxdegree+1,maxdegree+1)) fid = np.genfromtxt(filepath) for row in fid: for n in range(0,maxdegree): for m in range(0,maxdegree): Cnm[n+1,m+1]=row[2] Snm[n+1,m+1]=row[3] return [Cnm, Snm] 0 0 0 0 0 4.41278e-08 4.41278e-08 4.41278e-08 0 4.41278e-08 4.41278e-08 4.41278e-08 0 4.41278e-08 4.41278e-08 4.41278e-08 </code></pre> <p>I'm not getting any errors but I'm also not getting the right array. Can anyone shed some light on what I'm doing wrong?</p>
<p>Your data appear to be in a COO sparse matrix format already. This means, that you could use your own function, but you could also capitalize on the work done in the <code>scipy.sparse</code> package.</p> <p>For example this code creates a function that would generate one of your matrices at a time. You could modify it to return both matrices.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy import sparse def createarray(filepath, maxdegree, value_column): &quot;&quot;&quot;Create single array from file&quot;&quot;&quot; # load sparse data into numpy array data = np.loadtxt(filepath) # use coo_matrix to create the sparse matrix where the # values are found in the value_column column of data M = sparse.coo_matrix((data[:,value_column], (data[:,0], data[:,1])), shape=(maxdegree+1, maxdegree+1)) # if you need a numpy array call toarray() otherwise you # can return M which is sparse and more memory efficient return M.toarray() </code></pre> <p>Then for the first matrix you wanted to create you would set <code>value_column</code> to 2, and for the second you would set <code>value_column</code> to 3.</p> <pre class="lang-py prettyprint-override"><code># first matrix Cnm = createarray(filepath, maxdegree, 2) # second matrix Snm = createarray(filepath, maxdegree, 3) </code></pre>
python-3.x|numpy
0
120
65,426,278
to_sql() method of pandas sends primary key column as NULL even if the column is not present in dataframe
<p>I want to insert a data frame into the <em><strong>Snowflake</strong></em> database table. The database has columns like <code>id</code> which is a <code>primary_key</code> and <code>event_id</code> which is an <code>integer</code> field and it's also <code>nullable</code>.</p> <p>I have created a <code>declarative_base()</code> class using <em><strong>SQLAlchemy</strong></em> as shown below -</p> <pre><code>class AccountUsageLoginHistory(Base): __tablename__ = constants.TABLE_ACCOUNT_USAGE_LOGIN_HISTORY __table_args__ = { 'extend_existing':True, 'schema' : os.environ.get('SCHEMA_NAME_AUDITS') } id = Column(Integer, Sequence('id_account_usage_login_history'), primary_key=True) event_id = Column(Integer, nullable=True) </code></pre> <p>The class stated above creates a table in the <em><strong>Snowflake</strong></em> database.</p> <p>I have a data frame that has just one column <code>event_id</code>.</p> <p>When I try to insert the data using pandas <code>to_sql()</code> method Snowflake returns me an error shown below -</p> <pre><code>snowflake.connector.errors.ProgrammingError: 100072 (22000): 01991f2c-0be5-c903-0000-d5e5000c6cee: NULL result in a non-nullable column </code></pre> <p>This error is generated by snowflake because <code>to_sql()</code> is appending a column <code>id</code> and the values are set to <code>null</code> for each row of that column.</p> <pre><code>dataframe.to_sql(table_name, self.engine, index=False, method=pd_writer, if_exists=&quot;append&quot;) </code></pre> <p><em><strong>Consider this as case 1 -</strong></em></p> <p>I tried to run an insert query directly to snowflake -</p> <pre><code>insert into &quot;SFOPT_TEST&quot;.&quot;AUDITS&quot;.&quot;ACCOUNT_USAGE_LOGIN_HISTORY&quot; (ID, EVENT_ID) values(NULL, 33) </code></pre> <p>The query above returned me the same error -</p> <pre><code>NULL result in a non-nullable column </code></pre> <p>The query stated above is how probably the <code>to_sql()</code> method might be doing.</p> <p><em><strong>Consider this as case 2 -</strong></em></p> <p>I also tried to insert a row by executing the query stated below -</p> <pre><code>insert into &quot;SFOPT_TEST&quot;.&quot;AUDITS&quot;.&quot;ACCOUNT_USAGE_LOGIN_HISTORY&quot; (EVENT_ID) values(33) </code></pre> <p>Now, this particular query has been executed successfully by inserting the data into the table and it has also auto-generated value for column <code>id</code>.</p> <p>How can I make <code>to_sql()</code> method of pandas to use <em><strong>case 2</strong></em>?</p>
<p>Please note that <code>pandas.DataFrame.to_sql()</code> has by default parameter <code>index=True</code> which means that it will add an extra column (df.index) when inserting the data.</p> <p>Some Databases like PostgreSQL have a data type <code>serial</code> which allows you to sequentially fill the column with incremental numbers.</p> <p>Snowflake DB doesn't have that concept but instead, there are other ways to handle it:</p> <p><strong>First Option:</strong> You can use <code>CREATE SEQUENCE</code> statement and create a sequence directly in the db - <a href="https://docs.snowflake.com/en/sql-reference/sql/create-sequence.html" rel="nofollow noreferrer">here</a> is the official documentation on this topic. The downside of this approach is that you would need to convert your DataFrame into a proper SQL statement:</p> <p>db preparation part:</p> <pre><code>CREATE OR REPLACE SEQUENCE schema.my_sequence START = 1 INCREMENT = 1; CREATE OR REPLACE TABLE schema.my_table (i bigint, b text); </code></pre> <p>You would need to convert the DataFrame into Snowflake's <code>INSERT</code> statement and use <code>schema.my_sequence.nextval</code> to get the next ID value</p> <pre><code>INSERT INTO schema.my_table VALUES (schema.my_sequence.nextval, 'string_1'), (schema.my_sequence.nextval, 'string_2'); </code></pre> <p>The result will be:</p> <pre><code>i b 1 string_1 2 string_2 </code></pre> <p>Please note that there are some <a href="https://docs.snowflake.com/en/user-guide/querying-sequences.html" rel="nofollow noreferrer">limitations</a> to this approach and you need to ensure that each insert statement you do this way will be successful as calling <code>schema.my_sequence.nextval</code> and not inserting it will mean that there will be gaps numbers. To avoid it you can have a separate script that checks if the current insert was successful and if not it will recreate the sequence by calling:</p> <pre><code>REPLACE SEQUENCE schema.my_sequence start = (SELECT max(i) FROM schema.my_table) increment = 1; </code></pre> <p><strong>Alternative Option:</strong> You would need to create an extra function that runs the SQL to get the last i you inserted previously.</p> <pre><code>SELECT max(i) AS max_i FROM schema.my_table; </code></pre> <p>and then update the <code>index</code> in your DataFrame before running <code>to_sql()</code></p> <pre><code>df.index = range(max_i+1, len(df)+max_i+1) </code></pre> <p>This will ensure that your DataFrame index continues i in your table. Once that is done you can use</p> <pre><code>df.to_sql(index_label='i', name='my_table', con=connection_object) </code></pre> <p>It will use your index as one of the columns you insert allowing you to maintain the unique index in the table.</p>
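<p>Putting the alternative option together, a rough sketch (assuming a working SQLAlchemy <code>connection_object</code> and the <code>schema.my_table</code> example above; the names here are illustrative, not taken from the original question) could look like this:</p> <pre><code>import pandas as pd

# look up the largest id already in the table (0 if the table is empty)
max_i = int(pd.read_sql('SELECT COALESCE(MAX(i), 0) AS max_i FROM schema.my_table',
                        connection_object).iloc[0, 0])

df = pd.DataFrame({'b': ['string_3', 'string_4']})

# continue the numbering where the table left off
df.index = range(max_i + 1, len(df) + max_i + 1)

# the index is written out as the id column
df.to_sql(index_label='i', name='my_table', con=connection_object, if_exists='append')
</code></pre>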
python|pandas|snowflake-cloud-data-platform
1
121
50,216,866
simplify a python numpy complex expression to real and imaginary parts
<p>The expression Exp(it) – Exp(6it)/2 + i Exp(-14it)/3 , for t going to 2*pi is for plotting a Mystery curve as explained in <a href="http://www.johndcook.com/blog/2015/06/03/mystery-curve/" rel="nofollow noreferrer">http://www.johndcook.com/blog/2015/06/03/mystery-curve/</a> there is a listing of python numpy to plot this curve. I want to plot this formula using a procedural language like any basic language. So I have provided this formula to Wolfram Alpha like this:<br> <strong>simplify Exp(it) – Exp(6it)/2 + i Exp(-14it)/3</strong><br> and they output the result as: </p> <p>1/3 sin(14 t)+cos(t)-1/2 cos(6 t)+ i (sin(t)-1/2 sin(6 t)+1/3 cos(14 t)) </p> <p>so in a basic language I have used this simplification like this:<br> x = Cos(t) - Cos(k* t)/2 + Sin(14* t)/3<br> y = Cos(14* t)/3 + Sin(t)- Sin(k* t)/2</p> <p>The result is exactly the same as python numpy code listed in the referred to page.<br> My question, how to get the real and imag parts from numpy like we get it from wolfram alpha site?<br> So it tell us the the real part is Cos(t) - Cos(k* t)/2 + Sin(14* t)/3, and the imag part is Cos(14* t)/3 + Sin(t)- Sin(k* t)/2. or something like that.</p>
<pre><code>In [37]: def f(t): ...: return np.exp(1j*t) - np.exp(6j*t)/2 + 1j*np.exp(-14j*t)/3 In [39]: t = np.linspace(0,2*np.pi, 10) In [40]: t Out[40]: array([0. , 0.6981317 , 1.3962634 , 2.0943951 , 2.7925268 , 3.4906585 , 4.1887902 , 4.88692191, 5.58505361, 6.28318531]) In [41]: f(t) Out[41]: array([ 0.5 +0.33333333j, 0.90203773+0.76256944j, 0.63791071+0.8071432j , -1.28867513+0.69935874j, -0.36142337+0.83291557j, -1.01796187-0.71715012j, -0.71132487-1.03269207j, 0.20938564-0.2964469j , 1.13005116-1.38903119j, 0.5 +0.33333333j]) </code></pre> <p>The result of this calculation is an array with a complex dtype. That is, the elements of the array are complex numbers.</p> <p>Basically this is bacause <code>np.exp</code> function returns a complex value when given an imaginary argument:</p> <pre><code>In [44]: np.exp(1j*1) Out[44]: (0.5403023058681398+0.8414709848078965j) </code></pre> <p>It is easy to select just the <code>real</code> or <code>imag</code> parts of those complex numbers, with <code>np.real()</code> or the <code>real</code> attribute:</p> <pre><code>In [42]: f(t).real Out[42]: array([ 0.5 , 0.90203773, 0.63791071, -1.28867513, -0.36142337, -1.01796187, -0.71132487, 0.20938564, 1.13005116, 0.5 ]) In [43]: f(t).imag Out[43]: array([ 0.33333333, 0.76256944, 0.8071432 , 0.69935874, 0.83291557, -0.71715012, -1.03269207, -0.2964469 , -1.38903119, 0.33333333]) </code></pre> <p><code>Out[44]</code> can be reproduced with:</p> <pre><code>In [46]: np.cos(1) + 1j*np.sin(1) Out[46]: (0.5403023058681398+0.8414709848078965j) </code></pre> <p>The docs for <code>np.exp</code> suggest that this this expansion is being used internally, </p> <blockquote> <p>For complex arguments, x = a + ib, we can write e^x = e^a e^{ib}. The first term, e^a, is already known (it is the real argument, described above). The second term, e^{ib}, is \cos b + i \sin b, a function with magnitude 1 and a periodic phase.</p> </blockquote> <p>But <code>numpy</code> does not have any mechanism for doing the symbolic (algebraic) calculation. It works directly with complex numbers, not algebraic expressions.</p> <hr> <p>With <code>sympy</code>, a Python symbolic math package:</p> <pre><code>In [1]: import sympy In [3]: fn = sympy.sympify('exp(1j*re(x)) -exp(6j*re(x))/2 + 1j*exp(-14j*re(x))/3') ...: In [4]: fn Out[4]: -exp(6*I*re(x))/2 + exp(I*re(x)) + I*exp(-14*I*re(x))/3 In [5]: fn.as_real_imag() Out[5]: (sin(14*re(x))/3 + cos(re(x)) - cos(6*re(x))/2, sin(re(x)) - sin(6*re(x))/2 + cos(14*re(x))/3) </code></pre> <p>I had to use <code>re(x)</code> to limit the <code>x</code> variable to being real. Otherwise it would expand the expression to</p> <pre><code>exp(14*im(x))*sin(14*re(x))/3 ... </code></pre>
numpy
1
122
50,087,883
tf.while_loop with flexible row numbers per iteration
<p>I am trying to fill a 2d array in a <code>tf.while_loop</code>. The thing is the result of my computation at each iteration returns a variable number of rows. Tensorflow does not seem to allow this.</p> <p>See this minimal example that reproduce the issue:</p> <pre class="lang-py prettyprint-override"><code>indices = tf.constant([2, 5, 7, 9]) num_elems = tf.shape(indices)[0] init_array = tf.TensorArray(tf.float64, size=num_elems) initial_i = tf.constant(0, dtype='int32') def loop_body(i, ta): # Here if I choose a random rows number, it fails. n_rows = tf.random_uniform((), minval=0, maxval=10, dtype=tf.int64) # It works with a fixed row number. # n_rows = 2 anchor = tf.random_normal((n_rows, 4)) ta = ta.write(i, tf.cast(anchor, tf.float64)) return i+1, ta _, anchors= tf.while_loop(lambda i, ta: i &lt; num_elems, loop_body, [initial_i, init_array]) anchors = anchors.stack() anchors = tf.reshape(anchors, shape=(-1, 4)) anchors = tf.identity(anchors, name="anchors") with tf.Session() as sess: result = sess.run(anchors) print(result) </code></pre> <p>It returns:</p> <pre><code>[[ 0.07496446 -0.32444516 -0.47164568 1.10953283] [-0.78791034 1.87736523 0.99817699 0.45336106] [-0.65860498 -1.1703862 -0.05761402 -0.17642537] [ 0.49713874 1.01805222 0.60902107 0.85543454] [-1.38755643 -0.70669901 0.34549037 -0.85984546] [-1.32419562 0.71003789 0.34984082 -1.39001906] [ 2.26691341 -0.63561141 0.38636214 0.02521387] [-1.55348766 1.0176425 0.4889268 -0.12093868]] </code></pre> <p>I am also open to alternative solutions to fill a Tensor in a loop with a variable number of rows at each iteration.</p>
<p>Here's a nested <code>while_loop</code> solution which writes to a single <code>TensorArray</code>:</p> <pre><code>import tensorflow as tf def make_inner_loop_body(total_size, anchor): def _inner_loop_body(j, ta): return j + 1, ta.write(total_size + j, anchor[j]) return _inner_loop_body def loop_body(i, total_size, ta): n_rows = tf.random_uniform((), minval=0, maxval=10, dtype=tf.int32) n_rows = tf.Print(n_rows, [n_rows]) anchor = tf.random_normal((n_rows, 4), dtype=tf.float64) _, ta = tf.while_loop(lambda j, ta: j &lt; n_rows, make_inner_loop_body(total_size, anchor), (tf.zeros([], dtype=tf.int32), ta)) return i+1, total_size + n_rows, ta _, _, anchors= tf.while_loop(lambda i, total_size, ta: i &lt; 4, loop_body, (tf.zeros([], dtype=tf.int32), tf.zeros([], dtype=tf.int32), tf.TensorArray(tf.float64, size=0, dynamic_size=True))) anchors = anchors.stack() anchors = tf.reshape(anchors, shape=(-1, 4)) anchors = tf.identity(anchors, name="anchors") with tf.Session() as sess: result = sess.run(anchors) print("Final shape", result.shape) print(result) </code></pre> <p>This prints something like:</p> <pre><code>[5] [5] [7] [7] Final shape (24, 4) </code></pre> <p>I'm assuming there's some reason the <code>random_normal</code> needs to be processed in a <code>while_loop</code>. Otherwise as given it'd be much easier to write:</p> <pre><code>import tensorflow as tf n_rows = tf.random_uniform((4,), minval=0, maxval=10, dtype=tf.int32) anchors = tf.random_normal((tf.reduce_sum(n_rows), 4), dtype=tf.float64) with tf.Session() as sess: result = sess.run(anchors) print("Final shape", result.shape) print(result) </code></pre>
tensorflow|vector|tensor
1
123
50,065,295
Delimit array with different strings
<p>I have a text file that contains 3 columns of useful data that I would like to be able to extract in python using numpy. The file type is a *.nc and is <strong>NOT</strong> a netCDF4 filetype. It is a standard file output type for CNC machines. In my case it is sort of a CMM (coordinate measurement machine). The format goes something like this:</p> <p>X0.8523542Y0.0000000Z0.5312869</p> <p>The X,Y, and Z are the coordinate axes on the machine. My question is, can I delimit an array with multiple delimiters? In this case: "X","Y", and "Z".</p>
<p>You can use Pandas</p> <pre><code>import pandas as pd
from io import StringIO

#Create a mock file
ncfile = StringIO("""X0.8523542Y0.0000000Z0.5312869
X0.7523542Y1.0000000Z0.5312869
X0.6523542Y2.0000000Z0.5312869
X0.5523542Y3.0000000Z0.5312869""")

df = pd.read_csv(ncfile,header=None)

#Use regex with split to define delimiters as X, Y, Z.
#The part before the leading 'X' is empty, so it ends up in the first column.
df_out = df[0].str.split(r'X|Y|Z', expand=True)

#set_axis with inplace=False returns a new DataFrame, so assign the result back
df_out = df_out.set_axis(['index','X','Y','Z'], axis=1, inplace=False)
</code></pre> <p>Output:</p> <pre><code>  index          X          Y          Z
0        0.8523542  0.0000000  0.5312869
1        0.7523542  1.0000000  0.5312869
2        0.6523542  2.0000000  0.5312869
3        0.5523542  3.0000000  0.5312869
</code></pre>
python-3.x|numpy|csv
1
124
63,799,471
How do I not write the first column to an Excel file using Python?
<p>I use the following code to move data from one Excel file to another.</p> <pre><code>import pandas as pd inventory = pd.read_excel('Original_File.xlsx', skiprows=3) inventory.to_excel('New_File.xlsx') </code></pre> <p>How do I NOT write the content in column 1 to the new Excel file? Column 1 contains a blank column header then a row number for each line of data in the dataframe.</p>
<h2>Problem</h2> <p>By default, <code>to_excel</code> write row names (index) out.</p> <h2>Solution</h2> <p>when you call <code>to_excel</code>, you can skip the row name by setting parameter <code>index</code> as <code>False</code>:</p> <p><code>inventory.to_excel('New_File.xlsx', index=False)</code></p> <h2>Reference</h2> <p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html</a></p>
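<p>Applied to the code in the question, that is simply:</p> <pre><code>import pandas as pd

inventory = pd.read_excel('Original_File.xlsx', skiprows=3)

# index=False stops pandas from writing the unnamed row-number column
inventory.to_excel('New_File.xlsx', index=False)
</code></pre>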
python|excel|pandas
0
125
64,157,447
Pandas: Collapse rows in a Multiindex dataframe
<p>Below is my df:</p> <pre><code>df = pd.DataFrame({'A': [1, 1, 1, 2], 'B': [2, 2, 2, 3], 'C': [3, 3, 3, 4], 'D': ['Cancer A', 'Cancer B', 'Cancer A', 'Cancer B'], 'E': ['Ecog 9', 'Ecog 1', 'Ecog 0', 'Ecog 1'], 'F': ['val 6', 'val 1', 'val 0', 'val 1'], 'measure_m': [100, 200, 500, 300]}) print(df) A B C D E F measure_m 0 1 2 3 Cancer A Ecog 9 val 6 100 1 1 2 3 Cancer B Ecog 1 val 1 200 2 1 2 3 Cancer A Ecog 0 val 0 500 3 2 3 4 Cancer B Ecog 1 val 1 300 </code></pre> <p>When I <code>pivot</code> this df without passing the index, I get this:</p> <pre><code>In [1280]: df.pivot(index=None, columns = ['A', 'B', 'C', 'D', 'E', 'F']) Out[1280]: measure_m A 1 2 B 2 3 C 3 4 D Cancer A Cancer B Cancer A Cancer B E Ecog 9 Ecog 1 Ecog 0 Ecog 1 F val 6 val 1 val 0 val 1 0 100.0 NaN NaN NaN 1 NaN 200.0 NaN NaN 2 NaN NaN 500.0 NaN 3 NaN NaN NaN 300.0 </code></pre> <p>I want instead of <code>4 rows</code> just <code>1</code> single row with all values of <code>measure_m</code> column, like below:</p> <pre><code> measure_m A 1 2 B 2 3 C 3 4 D Cancer A Cancer B Cancer A Cancer B E Ecog 9 Ecog 1 Ecog 0 Ecog 1 F val 6 val 1 val 0 val 1 0 100.0 200.0 500.0 300.0 </code></pre> <p>How to go about this?</p>
<p>Do you mean:</p> <pre><code>df.set_index(list(df.columns[:-1])).T </code></pre> <p>Output:</p> <pre><code>A 1 2 B 2 3 C 3 4 D Cancer A Cancer B Cancer A Cancer B E Ecog 9 Ecog 1 Ecog 0 Ecog 1 F val 6 val 1 val 0 val 1 measure_m 100 200 500 300 </code></pre> <hr /> <p><strong>Update</strong> a little modification to match your output:</p> <pre><code>cols = ['A', 'B', 'C', 'D', 'E', 'F'] (df.set_index(cols) [['measure_m']] # only need this if you have more columns .unstack(level=cols) .to_frame().T ) </code></pre> <p>Output:</p> <pre><code> measure_m A 1 2 B 2 3 C 3 4 D Cancer A Cancer B Cancer A Cancer B E Ecog 9 Ecog 1 Ecog 0 Ecog 1 F val 6 val 1 val 0 val 1 0 100 200 500 300 </code></pre>
python|python-3.x|pandas|dataframe|multi-index
4
126
64,160,528
How to use cross validation in keras classifier
<p>I was practicing the keras classification for imbalanced data. I followed the official example:</p> <p><a href="https://keras.io/examples/structured_data/imbalanced_classification/" rel="nofollow noreferrer">https://keras.io/examples/structured_data/imbalanced_classification/</a></p> <p>and used the scikit-learn api to do cross-validation. I have tried the model with different parameter. However, all the times one of the 3 folds has value 0.</p> <p>eg.</p> <pre><code>results [0.99242424 0.99236641 0. ] </code></pre> <p>What am I doing wrong? How to get ALL THREE validation recall values of order &quot;0.8&quot;?</p> <h1>MWE</h1> <pre class="lang-py prettyprint-override"><code>%%time import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from sklearn.model_selection import train_test_split from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score from sklearn.model_selection import StratifiedKFold import os import random SEED = 100 os.environ['PYTHONHASHSEED'] = str(SEED) np.random.seed(SEED) random.seed(SEED) tf.random.set_seed(SEED) # load the data ifile = &quot;https://github.com/bhishanpdl/Datasets/blob/master/Projects/Fraud_detection/raw/creditcard.csv.zip?raw=true&quot; df = pd.read_csv(ifile,compression='zip') # train test split target = 'Class' Xtrain,Xtest,ytrain,ytest = train_test_split(df.drop([target],axis=1), df[target],test_size=0.2,stratify=df[target],random_state=SEED) print(f&quot;Xtrain shape: {Xtrain.shape}&quot;) print(f&quot;ytrain shape: {ytrain.shape}&quot;) # build the model def build_fn(n_feats): model = keras.models.Sequential() model.add(keras.layers.Dense(256, activation=&quot;relu&quot;, input_shape=(n_feats,))) model.add(keras.layers.Dense(256, activation=&quot;relu&quot;)) model.add(keras.layers.Dropout(0.3)) model.add(keras.layers.Dense(256, activation=&quot;relu&quot;)) model.add(keras.layers.Dropout(0.3)) # last layer is dense 1 for binary sigmoid model.add(keras.layers.Dense(1, activation=&quot;sigmoid&quot;)) # compile model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(1e-2), metrics=['Recall']) return model # fitting the model n_feats = Xtrain.shape[-1] counts = np.bincount(ytrain) weight_for_0 = 1.0 / counts[0] weight_for_1 = 1.0 / counts[1] class_weight = {0: weight_for_0, 1: weight_for_1} FIT_PARAMS = {'class_weight' : class_weight} clf_keras = KerasClassifier(build_fn=build_fn, n_feats=n_feats, # custom argument epochs=30, batch_size=2048, verbose=2) skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=SEED) results = cross_val_score(clf_keras, Xtrain, ytrain, cv=skf, scoring='recall', fit_params = FIT_PARAMS, n_jobs = -1, error_score='raise' ) print('results', results) </code></pre> <h1>Result</h1> <pre><code>Xtrain shape: (227845, 30) ytrain shape: (227845,) results [0.99242424 0.99236641 0. ] CPU times: user 3.62 s, sys: 117 ms, total: 3.74 s Wall time: 5min 15s </code></pre> <h1>Problem</h1> <p>I am getting the third recall as 0. I am expecting it of the order 0.8, how to make sure all three values are around 0.8 or more?</p>
<p>MilkyWay001,</p> <p>You have chosen to use <code>sklearn</code> wrappers for your model - they have benefits, but the model training process is hidden. Instead, I trained the model separately with a validation dataset added. The code for this would be:</p> <pre><code>clf_1 = KerasClassifier(build_fn=build_fn, n_feats=n_feats)
clf_1.fit(Xtrain, ytrain, class_weight=class_weight, validation_data=(Xtest, ytest), epochs=30,batch_size=2048, verbose=1)
</code></pre> <p>In the <code>Model.fit()</code> output it is clearly seen that while the loss metric goes down, recall is not stable. This led to poor performance in CV, reflected as zeros in the CV results, as you observed.</p> <p>I fixed this by reducing the learning rate to just 0.0001. While it is 100 times less than yours - it reaches 98% recall on train and 100% (or close) on test in just 10 epochs.</p> <p>Your code needs just one fix to achieve stable results: change the LR to a much lower one, like 0.0001:</p> <pre><code>optimizer=keras.optimizers.Adam(1e-4),
</code></pre> <p>You can experiment with LR in the range &lt; 0.001. For reference, with LR <code>0.0001</code> I got:</p> <pre><code>results [0.99242424 0.97709924 1. ]
</code></pre> <p>Good luck!</p> <p>PS: thanks for including a compact and complete MWE</p>
python|pandas|tensorflow|keras|scikit-learn
1
127
33,055,070
"AttributeError: 'matrix' object has no attribute 'strftime'" error in numpy python
<p>I have a matrix with (72000, 1) dimension. This matrix involves timestamps.</p> <p>I want to use "strftime" as the following; <code>strftime("%d/%m/%y")</code>, in order to get the output something like this: <code>'11/03/02'</code>.</p> <p>I have such a matrix:</p> <pre><code>M = np.matrix([timestamps]) </code></pre> <p>And I have used "strftime" in order to convert all the matrix involving timestamps to a matrix involving dates in string types. For this reason, I have used "strftime" as the follwing:</p> <pre><code>M = M.strftime("%d/%m/%y") </code></pre> <p>When I run the code, I get this error: </p> <pre><code>AttributeError: 'matrix' object has no attribute 'strftime' </code></pre> <p>What is the right way of using this function? How can I convert the timestamp matrix to date string matrix?</p>
<p>As the error message shows you, you cannot do something like <code>matrix.strftime</code>. One thing you can do is use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow"><code>numpy.apply_along_axis</code></a>. Example -</p> <pre><code>np.apply_along_axis((lambda x:[x[0].strftime("%d/%m/%y")]),1,M)
</code></pre> <p>Demo -</p> <pre><code>In [58]: M = np.matrix([[datetime.datetime.now()]*5]).T

In [59]: M.shape
Out[59]: (5, 1)

In [60]: np.apply_along_axis((lambda x:[x[0].strftime("%d/%m/%y")]),1,M)
Out[60]:
array([['10/10/15'],
       ['10/10/15'],
       ['10/10/15'],
       ['10/10/15'],
       ['10/10/15']], dtype='&lt;U8')
</code></pre> <hr> <p>For the new error you are getting - </p> <blockquote> <p>"AttributeError: 'numpy.float64' object has no attribute 'strftime'"</p> </blockquote> <p>This means that the objects are not <code>datetime</code> objects, so if they are timestamps, you can convert them to datetime first. Example -</p> <pre><code>np.apply_along_axis((lambda x:[datetime.datetime.fromtimestamp(x[0]).strftime("%d/%m/%y")]),1,M)
</code></pre>
python|numpy|matrix|strftime
2
128
38,839,402
how to use assert_frame_equal in unittest
<p>New to unittest package. I'm trying to verify the DataFrame returned by a function through the following code. Even though I hardcoded the inputs of <code>assert_frame_equal</code> to be equal (<code>pd.DataFrame([0,0,0,0])</code>), the unittest still fails. Anyone would like to explain why it happens?</p> <pre><code>import unittest from pandas.util.testing import assert_frame_equal class TestSplitWeight(unittest.TestCase): def test_allZero(self): #splitWeight(pd.DataFrame([0,0,0,0]),10) self.assert_frame_equal(pd.DataFrame([0,0,0,0]),pd.DataFrame([0,0,0,0])) suite = unittest.TestLoader().loadTestsFromTestCase(TestSplitWeight) unittest.TextTestRunner(verbosity=2).run(suite) </code></pre> <pre>Error: AttributeError: 'TestSplitWeight' object has no attribute 'assert_frame_equal'</pre>
<p>alecxe answer is incomplete, you can indeed use pandas' <code>assert_frame_equal()</code> with <code>unittest.TestCase</code>, using <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.addTypeEqualityFunc" rel="noreferrer"><code>unittest.TestCase.addTypeEqualityFunc</code></a></p> <pre class="lang-py prettyprint-override"><code>import unittest import pandas as pd import pandas.testing as pd_testing class TestSplitWeight(unittest.TestCase): def assertDataframeEqual(self, a, b, msg): try: pd_testing.assert_frame_equal(a, b) except AssertionError as e: raise self.failureException(msg) from e def setUp(self): self.addTypeEqualityFunc(pd.DataFrame, self.assertDataframeEqual) def test_allZero(self): self.assertEqual(pd.DataFrame([0,0,0,0]), pd.DataFrame([0,0,0,0])) </code></pre>
python|pandas|unit-testing|python-unittest
23
129
63,188,345
How can I remove the string element in a series in Python?
<p>I got a series name <code>basepay</code> that contains both String and Numeric element. What I wanted to do is to calculate the mean of the numeric part. I've tried <code>basepay.mean()</code> and the kernel return <code>TypeError: unsupported operand type(s) for +: 'float' and 'str'</code> So I tried to drop off the non-numeric part. I used <code>mask = basepay.astype(str).str.isnumeric()</code> to create a mask. But all the elements in the returning series are <code>False</code>.</p> <p>Shouldn't it return <code>True</code> when the element in the <code>basepay</code> is like <code>'1234.32'</code> ?</p> <p>By the way, is there a faster way to deal with this problem ?</p>
<p>It might be easiest to just use a try/except block inside the mask function, something like:</p> <hr /> <pre><code>try:
    float(basepay)
except (ValueError, TypeError):
    # handle (for example, skip) the values that cannot be converted to a number
    pass
</code></pre>
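<p>As a side note, the mask in the question came back all <code>False</code> because <code>str.isnumeric()</code> returns <code>False</code> for strings containing a decimal point, such as <code>'1234.32'</code>. For the &quot;faster way&quot; part of the question, one vectorized sketch (assuming <code>basepay</code> is a pandas Series) is to let pandas coerce the non-numeric entries to <code>NaN</code> and then take the mean:</p> <pre><code>import pandas as pd

# strings that cannot be parsed as numbers become NaN and are ignored by mean()
numeric_basepay = pd.to_numeric(basepay, errors='coerce')
print(numeric_basepay.mean())
</code></pre>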
python|pandas|series
1
130
63,310,083
Dynamic range (bit depth) in PIL's fromarray() function?
<p>I did some image-processing on multi-frame TIFF images from a 12-bit camera and would like to save the output. However, the <a href="https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes" rel="nofollow noreferrer">PIL documentation</a> does not list a 12-bit mode for <code>fromarray()</code>. How does PIL handle bit depth and how can I ensure that the saved TIFF images will have the same dynamic range as the original ones?</p> <p>Example code:</p> <pre class="lang-python prettyprint-override"><code>import os import numpy as np import matplotlib.pyplot as plt from PIL import Image # Read image file names pathname = '/home/user/images/' filenameList = [filename for filename in os.listdir(pathname) if filename.endswith(('.tif', '.TIF', '.tiff', '.TIFF'))] # Open image files, average over all frames, save averaged image files for filename in filenameList: img = Image.open(pathname + filename) X, Y = img.size NFrames = img.n_frames imgArray = np.zeros((Y, X)) for i in range(NFrames): img.seek(i) imgArray += np.array(img) i += 1 imgArrayAverage = imgArray/NFrames imgAverage = Image.fromarray(imgArrayAverage) # &lt;=== THIS!!! imgAverage.save(pathname + filename.rsplit('.')[0] + '.tif') img.close() </code></pre>
<p>In my experience, 12-bit images get opened as 16-bit images with the four most significant bits all zero. My solution has been to convert the images to numpy arrays using</p> <pre><code>arr = np.array(img).astype(np.uint16)
</code></pre> <p>The astype() directive is probably not strictly necessary, but it seems like it's a good idea. Then to convert to 16-bit, shift your binary digits four to the left:</p> <pre><code>arr = np.multiply(arr,2**4)
</code></pre> <p>If you want to work with 8-bit instead,</p> <pre><code>arr = np.floor(np.divide(arr,2**4)).astype(np.uint8)
</code></pre> <p>where the astype() is necessary to force conversion to 8-bit integers. I think that the 8-bit truncation implicitly performs the floor() function but I left it in just in case.</p> <p>Finally, convert back to a PIL Image object and you're good to go:</p> <pre><code>img = Image.fromarray(arr)
</code></pre> <p>For your specific use-case, this would have the same effect:</p> <pre><code>imgAverage = Image.fromarray(imgArrayAverage.astype(np.uint16) * 2**4)
</code></pre> <p>The type conversion again may not be necessary, but it will probably save you time since dividing imgArray by NFrames should implicitly result in an array of floats. If you're worried about precision, it could be omitted.</p>
arrays|numpy|save|python-imaging-library|bit-depth
0
131
63,138,239
Convert Matlab struct to python/numpy
<p>I have a small snippet of matlab code I would like to translate the python/numpy</p> <pre><code>for i = 1:numel(order) %This puts all output data into one variable, alongside the scan length %and separation plotout = [plotout; resout(i).output ... repmat((i-1)*separation,[length(resout(i).output) 1]) ... transpose(0:0.004712:(length(resout(i).output)*0.004712)-0.004712)]; end </code></pre> <p>I have made an attempt using np.matlibrepmat in replacment of repmat, however I am unsure how to continue specifically with the last line <code>transpose(0:0.004712:(length(resout(i).output)*0.004712)-0.004712)];</code></p> <p>Any help would be greatly appreciated</p>
<p>That last line produces a column vector to be appended along side the other column vectors.</p> <p>The code</p> <p><code>(0 : 0.004712 : (length(resout(i).output)*0.004712)) - 0.004712</code></p> <p>counts from <code>0</code> to <code>(length(resout(i).output)*0.004712)</code> at a step size of <code>0.004712</code>, then subtracts <code>0.004712</code> from each element.</p> <p>This is equivalent to something like:</p> <p><code>np.arange(0, (len(resout(i).output)*0.004712), 0.004712) - 0.004712</code></p> <p>which is a row vector of some size <code>[1,N]</code>. The resulting row vector is then transposed into a column vector <code>[N,1]</code>.</p> <p>In numpy this can be done by something like</p> <pre><code>A = np.arange(0, (len(resout(i).output)*0.004712), 0.004712) - 0.004712 np.reshape(A, (len(A), 1)) </code></pre>
python|matlab|numpy
0
132
31,732,415
df.loc filtering doesn't work with None values
<p>Why does this filtering not work when the filter is <code>Project ID</code> == None? I also noticed <code>is None</code> rather than <code>== None</code> returns <code>KeyError: False</code> </p> <pre><code>import pandas as pd df = pd.DataFrame(data = [['Project1', 'CT', 800], [None, 3, 1000], ['Project3', 'CA', 20]], columns=['Project ID', 'State', 'Cost']) print df.loc[df['Project ID'] == 'Project1'].values print df.loc[df['Project ID'] == None].values </code></pre> <p>output:</p> <pre><code>[['Project1' 'CT' 800L]] [] </code></pre>
<p>You have to use <code>isnull</code> for this:</p> <pre><code>In [3]: df[df['Project ID'].isnull()] Out[3]: Project ID State Cost 1 None 3 1000 </code></pre> <p>Or use <code>apply</code>:</p> <pre><code>In [5]: df.loc[df['Project ID'].apply(lambda x: x is None)] Out[5]: Project ID State Cost 1 None 3 1000 </code></pre>
python|python-2.7|pandas
5
133
41,295,405
pandas change list of value into column
<p>I have a df like this and I want to expand the list of values into indicator columns</p> <pre><code>  uid      device
0 000  [1.0, 3.0]
1 001       [3.0]
2 003       [nan]
3 004  [2.0, 3.0]
4 005       [1.0]
5 006       [1.0]
6 006       [nan]
7 007       [2.0]
</code></pre> <p>should be</p> <pre><code>  uid      device  NA  just_1  just_2or3  Both
0 000  [1.0, 3.0]   0       0          0     1
1 001       [3.0]   0       0          1     0
2 003       [nan]   1       0          0     0
3 004  [2.0, 3.0]   0       0          1     0
4 005       [1.0]   0       1          0     0
5 006       [1.0]   0       1          0     0
6 006       [nan]   1       0          0     0
7 007       [2.0]   0       0          1     0
8 008  [1.0, 2.0]   0       0          0     1
</code></pre> <p>I want to turn this into dummy variables: if the device list contains only 1.0, set just_1 = 1; if it contains 2.0 and/or 3.0 but not 1.0 (e.g. [2.0], [3.0], [2.0, 3.0]), set just_2or3 = 1.</p> <p>Only if 1.0 appears together with 2.0 or 3.0 in the list, like [1.0, 3.0] or [1.0, 2.0], set Both = 1.</p> <p>How can I do that? Thank you</p>
<p>You can use custom function <code>f</code> with list comprehensions, last cast <code>boolean</code> values to <code>int</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>astype</code></a>:</p> <pre><code>df = pd.DataFrame({'uid':['000','001','002','003','004','005','006','007'], 'device':[[1.0,3.0],[3.0],[np.nan],[2.0,3.0], [1.0],[1.0],[np.nan],[2.0]]}) print (df) device uid 0 [1.0, 3.0] 000 1 [3.0] 001 2 [nan] 002 3 [2.0, 3.0] 003 4 [1.0] 004 5 [1.0] 005 6 [nan] 006 7 [2.0] 007 def f(x): #print (x) NA = [np.nan in x][0] just_1 = [1 in x and not(2 in x or 3 in x)][0] both = [1 in x and (2 in x or 3 in x)][0] just_2or3 = [1 not in x and (2 in x or 3 in x)][0] return pd.Series([NA, just_1, just_2or3, both], index=['NA','just_1','just_2or3', 'both']) print (df.set_index('uid').device.apply(f).astype(int).reset_index()) uid NA just_1 just_2or3 both 0 000 0 0 0 1 1 001 0 0 1 0 2 002 1 0 0 0 3 003 0 0 1 0 4 004 0 1 0 0 5 005 0 1 0 0 6 006 1 0 0 0 7 007 0 0 1 0 </code></pre>
python|pandas
1
134
41,339,701
Python - change header color of dataframe and save it to excel file
<p>I have a dataframe <code>df</code> where i want to change the header background color, apply borders and save it excel file in .xlsx extension. </p> <p>I have tried styleframe, some functionalities in openpyxl and tried to write udf s, But nothing seemed to work.</p>
<p>Here is the solution using <a href="https://github.com/DeepSpace2/StyleFrame" rel="nofollow noreferrer">StyleFrame</a> package that you mentioned.</p> <pre><code>import pandas as pd from styleframe import StyleFrame, Styler, utils df = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]}) sf = StyleFrame(df) sf.apply_headers_style(styler_obj=Styler(bold=True, bg_color=utils.colors.green, border_type=utils.borders.medium)) sf.to_excel('output.xlsx').save() </code></pre> <p>I would recommend you to make sure that you have the lastest version of StyleFrame installed.</p> <pre><code>pip install -U styleframe </code></pre>
python|excel|python-2.7|pandas
3
135
61,370,341
Remove a slice of seconds from every minute in pandas
<p>I was wondering how it is possible to remove a slice of time from bigger time unit. Let us say we have a dataset from a day and we want to remove the first 10 seconds of every minute from this day. How can I do this in Pandas or Numpy?</p> <p>The example shows values in a range of 15 min and the values between 06 am and 10 am are deleted. This should happen for everyday in the dataset. I hope you can help me.</p> <pre><code>Before: 2019-01-01 05:15:00 0.0 2019-01-01 05:30:00 0.0 2019-01-01 05:45:00 0.0 2019-01-01 06:00:00 0.0 2019-01-01 06:15:00 0.0 After: 2019-01-01 05:15:00 0.0 2019-01-01 05:30:00 0.0 2019-01-01 05:45:00 0.0 2019-01-01 10:15:00 0.0 2019-01-01 10:30:00 0.0 </code></pre> <p>Thank you.</p> <p>EDIT:</p> <p>I tried this and it worked:</p> <pre><code>#The actual deleting of the rows between 6am and 10 am def delete_row_by_time(df, day): from_ts = day + ' 06:00:00' to_ts = day + ' 10:00:00' df = df[(df.index &lt; from_ts) | (df.index &gt; to_ts)] return df #Get the actual days days = eins.index.strftime('%Y-%m-%d').unique() days = pd.to_datetime(days) start_date = days.min() end_date = days.max() delta = datetime.timedelta(days=1) #iterate through all days in dataset while start_date &lt;= end_date: print(start_date) df = delete_row_by_time(df, str(start_date)) start_date += delta </code></pre> <p>Maybe there are some improvements to make.</p>
<p>The previous solutions were not going to work because you don't have a DateTime column but a DatetimeIndex, so the syntax is a bit different.</p> <p>Your solution works; however, this can also be done with a vectorized boolean mask on the index, so you don't have to go day by day in a <code>for/while</code> loop</p> <pre><code>from datetime import datetime

np.random.seed(0)
index = pd.date_range(datetime.now(), freq='15T', periods=1000)
sample_data = np.random.rand(1000)

df = pd.DataFrame(dict(data=sample_data), index=index)

# keep rows before 06:00 and after 10:00 (10:00:00 itself is dropped)
df = df[(df.index.hour &lt; 6) | (df.index.hour &gt; 10) | ((df.index.hour == 10) &amp; (df.index.minute &gt; 0))]

df.iloc[20:26]

#                          data
# 2020-04-22 05:00:00  0.978618
# 2020-04-22 05:15:00  0.799159
# 2020-04-22 05:30:00  0.461479
# 2020-04-22 05:45:00  0.780529
# 2020-04-22 10:15:00  0.437032
# 2020-04-22 10:30:00  0.697631
</code></pre> <p>This deletes every row from 06:00 up to and including 10:00 on each day, while keeping everything before 6 AM and after 10 AM.</p>
python|pandas|algorithm|time-series
0
136
61,185,360
The Tensorflow model can't be completely deleted and still occupies CPU memory
<p>I'm working on optimizing the neural network architecture and hyperparameters. For this reason, I build a for loop to send in the hyperparameters and build/train/evaluate a new model through each iteration. An example looks like this:</p> <pre><code>for k in range(10):
    #full_model() function is used to build the new model with
    #hyperparameters l1,l2,l3
    md=full_model(l1,l2,l3)
    md.compile(optimizer='SGD',loss='categorical_crossentropy',metrics=['accuracy'])
    md.fit(trads,validation_data=vds,epochs=3)
    teloss,teacc=md.evaluate(teds)
</code></pre> <p>and I try to completely remove the created model and free the occupied CPU memory after evaluation in the loop by adding the following code in the loop:</p> <pre><code>del md
gc.collect()
tf.keras.backend.clear_session()
tf.compat.v1.reset_default_graph()
</code></pre> <p>But I observe that the CPU memory is not freed after adding the above code inside the loop, and the memory usage keeps increasing across iterations. Finally, the process gets killed by the system because of the memory leak. </p> <p>By the way, I have used some custom layers which save their sublayers and tensors inside a list. Custom layers of this kind are also kept in a list while building the whole model. I'm not sure whether this is one of the reasons causing the problem. The example pseudo code looks like this:</p> <pre><code>class custom_layer(tf.keras.layers.Layer):
    def __init__(self):
        self.layer_li=[layers.conv(),layers.Maxpool2d()....]
        ...
    def call(self,inputs):
        self.out1,self.out2=self.layer_li[0](inputs),self.layer_li[1](inputs)
        return [self.out1,self.out2]

class build_model(tf.keras.Model):
    def __init__(self):
        self.sub_layers_list=[sublayer_1(),sublayer2...]
    def call(self,inputs):
        self.out=self.sub_layers_list[0](inputs)
        for k in range(1,len(self.sub_layers_list)):
            self.out=self.sub_layers_list[k](self.out)
        return self.out
</code></pre> <p>Can anyone help me make this work without the memory leak? Thanks in advance for any help!</p> <p><strong>edited</strong> my code runs on tensorflow 2.1 with ubuntu 16.04 </p>
<p>Finally, this problem was resolved by changing the OS to Windows. If anyone has a better way to deal with it on Ubuntu, suggestions or comments are welcome.</p>
python|tensorflow|keras|tensorflow2.0
0
137
68,792,428
How can I parallelize np.matmul and np.multiply?
<p>I have a question about matrix calculation using numpy. How can I parallelize these calculation such as <code>np.matmul</code> and <code>np.multiply</code>? I cannot find any references describing how to compute np.matmul using parallelization.</p> <pre><code>def time_shift_R(V, R_1, I0, t): # V is the potential function which returns an array temp1 = V(xx, yy, t) + B*I0**2 temp = P*np.matmul(M, I0) + Q*np.matmul(I0, M) - np.multiply(temp1, I0) R1 = ( R_1 - dt*temp ) / ( 1 - dt*B*R_1*I0 ) return R1 </code></pre> <p>I appreciate your kind help in advance.</p>
<p>You might want to do some time tests to see what exactly is taking most time. For example on a rather modest machine with stock Ubuntu linux:</p> <p>Make a complex array (you didn't cite any sizes so I'm just guessing as to something reasonable):</p> <pre><code>In [60]: A = np.ones((1000,1000),complex) </code></pre> <p><code>multiply</code> and the operator * are basically the same:</p> <pre><code>In [61]: timeit A*A 7.49 ms ± 4.99 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [62]: timeit np.multiply(A,A) 7.48 ms ± 6.72 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p><code>matmul</code> is quite a bit longer, but then it's doing a lot more. A faster BLAS equivalent might help here. Note <code>matmul</code> is smart enough to use a specialized BLAS function for the transpose case. The @ operator is basically the same.</p> <pre><code>In [63]: timeit np.matmul(A,A) 381 ms ± 8.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [64]: timeit np.matmul(A,A.T) 231 ms ± 9.84 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p>and the power calc:</p> <pre><code>In [65]: timeit 10.0*A**2 14.4 ms ± 1.19 ms per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p>The <code>V(xx, yy, t)</code> is unknown.</p>
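<p>To add a hedged note on the parallelization question itself: whether <code>np.matmul</code> uses multiple cores depends on the BLAS library NumPy is linked against (OpenBLAS, MKL, ...), not on NumPy itself, while elementwise calls like <code>np.multiply</code> run single-threaded in NumPy. A small sketch for checking and controlling that (the thread counts below are placeholders, not recommendations):</p> <pre><code>import os

# these must be set before NumPy (and its BLAS) is imported to reliably take effect
os.environ.setdefault('OMP_NUM_THREADS', '4')        # OpenMP-based BLAS builds
os.environ.setdefault('OPENBLAS_NUM_THREADS', '4')   # OpenBLAS
os.environ.setdefault('MKL_NUM_THREADS', '4')        # Intel MKL

import numpy as np

np.show_config()                 # shows which BLAS/LAPACK NumPy was built against

A = np.ones((1000, 1000), complex)
C = np.matmul(A, A)              # this is the call a multithreaded BLAS can speed up
</code></pre>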
python|numpy|parallel-processing
0
138
36,395,030
How do I count the frequency against a specific list?
<p>I have a <code>DataFrame</code> that looks like this.</p> <pre><code> date name 0 2015-06-13 00:21:25 a 1 2015-06-13 01:00:25 b 2 2015-06-13 02:54:48 c 3 2015-06-15 14:38:15 a 4 2015-06-15 15:29:28 b </code></pre> <p>I want to count the occurrences of dates against a specific date range, including ones that do not appear in the column (and ignores whatever that is in the <code>name</code> column). For example, I might have a date range that looks like this:</p> <pre><code>periods = pd.date_range('2015-06-13', '2015-06-16', freq = 'd') </code></pre> <p>Then, I want an output that looks something like:</p> <pre><code>date count 2015-06-13 3 2015-06-14 0 2015-06-15 2 2015-06-16 0 </code></pre> <p>I haven't been able to find any function that let me keep the <code>0</code> rows.</p>
<p>I think you can first use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>date</code></a> from column <code>date</code> for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html" rel="nofollow"><code>reindex</code></a> by <code>periods</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> by <code>0</code>. Last convert <code>float</code> to <code>int</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>df = df['date'].dt.date.value_counts() print df 2015-06-13 3 2015-06-15 2 Name: date, dtype: int64 periods = pd.date_range('2015-06-13', '2015-06-16', freq = 'd') df = df.reindex(periods).fillna(0).astype(int).reset_index() df.columns = ['date','count'] print df date count 0 2015-06-13 3 1 2015-06-14 0 2 2015-06-15 2 3 2015-06-16 0 </code></pre>
python|pandas|dataframe
2
139
36,646,854
Combining regplot with piecewise linear regression on a Facetgrid with seaborn
<p>I want to plot on a grid my data with associated errorbars and a piecewise linear regression through the mean of each timepoint. I have my data in a pandas dataframe and would like to us seaborn to do the job. </p> <p>If I use seaborns factorplot I get close.</p> <pre><code>g = sns.factorplot(x="Time", y='value', hue="Name", col="PEAK", data=meltdf, size=4, aspect=1.0,col_wrap=3,sharey=False,scale=0.7) </code></pre> <p><a href="http://i.stack.imgur.com/NPcGh.png" rel="nofollow">output for the factorplot</a></p> <p>But notice that my xaxis is not scaled correctly(this makes sense since the factorplot is designed for categorical comparisons)</p> <p>If I instead create a FacetGrid and map regplot and plt.plot onto the grid I get correct spacing on the xaxis and keep error bars etc. but the linear regression is not how I want it</p> <pre><code>meltdf = pd.melt(Conc_norm.drop(['GLC','pan','Ratio %'],axis=1), id_vars=['Name','Time'], var_name='PEAK') g = sns.FacetGrid(meltdf, col="PEAK",hue='Name', col_wrap=4,sharey=False) g.map(sns.regplot, "Time", "value",fit_reg=False, x_estimator=np.mean); g.map(plt.plot, "Time", "value"); </code></pre> <p><a href="http://i.stack.imgur.com/Sgndy.png" rel="nofollow">output for the Facetgrid mapped with regplot and plt.plot</a></p> <p>Now comes the question: How do I plot a piecewise linear regression between the points in the plot?</p> <p>Thanks,</p>
<p>After trawling the net and reading many of mwaskom's excellent answers it seems I have found a working solution</p> <pre><code>def _plotmean(x, *args, **kwargs): ax = plt.gca() data = kwargs.pop('data') data = data.groupby(x).mean() data.plot(ax=ax, **kwargs) Conc_norm.sort_values('Time', inplace=True) meltdf = pd.melt(Conc_norm.drop(['GLC','pan','Ratio %'], axis=1), id_vars=['Name','Time'], var_name='PEAK') g = sns.FacetGrid(meltdf, col="PEAK", hue='Name', col_wrap=3,sharey=False) g.map(sns.regplot, "Time", "value", fit_reg=False, x_estimator=np.mean) g.map_dataframe(_plotmean, "Time") g.add_legend() </code></pre> <p><a href="https://i.stack.imgur.com/0GnNs.png" rel="nofollow noreferrer">working output</a></p>
python|pandas|seaborn
2
140
65,829,670
AttributeError: 'DataFrame' object has no attribute 'to_CSV'
<p>I'm trying to store my extracted chrome data into a csv format using df.to_CSV</p> <p>here is my code :</p> <pre><code>content = driver.page_source soup = BeautifulSoup(content) for a in soup.findAll('a',href=True, attrs={'class':'_13oc-S'}): name=a.find('div', attrs={'class':'_4rR01T'}) price=a.find('div', attrs={'class':'_30jeq3 _1_WHN1'}) rating=a.find('div', attrs={'class':'hGSR34 _2beYZw'}) products.append(name.text) prices.append(price.text) ratings.append(rating.text) </code></pre> <pre><code>df = pd.DataFrame({'Product Name':products, 'Price':prices, 'Rating':ratings}) df.to_CSV(r'C:\Users\Krea\Documents\products.csv', index=False) </code></pre>
<p>It's case-sensitive, should be <code>df.to_csv(...)</code></p>
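<p>Applied to the last line of the code in the question, that is:</p> <pre><code>df.to_csv(r'C:\Users\Krea\Documents\products.csv', index=False)
</code></pre>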
python|pandas|dataframe|selenium|beautifulsoup
1
141
65,647,405
Create a function for a number of lists and correctly group the list elements
<p>I have 3 lists, but sometimes only 2, that each contain 4 multi-index dataframes.</p> <pre><code>list1=[df1, df2, df3, df4]
list2=[df1_, df2_, df3_, df4_]
list3=[df1__, df2__, df3__, df4__]
</code></pre> <p>The next step is to create multi-index dataframes:</p> <pre><code>reportTable1 = list1[0].round(2) #this dataframe is equal to list1[0], in other words &quot;df1&quot;.
reportTable2 = pd.concat([list1[1], list2[1], list3[1]], axis = 0).round(2) #these dataframes have different columns.
reportTable3 = pd.concat([list1[2], list2[2], list3[2]], axis = 0).round(2) #they have the same columns.
reportTable4 = pd.concat([list1[3], list2[3], list3[3]], axis = 0).applymap('{:.2%}'.format) #they have the same columns.
</code></pre> <p>Firstly, I want to define a function for these steps with cleaner code.</p> <p>My main problem is that in some cases <code>list3</code> doesn't exist. In this situation, I do not want to get an error. How can I run this code in cases where <code>list3</code> is not available?</p>
<ul> <li>The following function will work if there are only two <code>lists</code> of <code>dataframes</code>. <ul> <li>The <code>lists</code> of <code>dataframes</code> are passed to the function as <a href="https://realpython.com/python-kwargs-and-args/#using-the-python-args-variable-in-function-definitions" rel="nofollow noreferrer"><code>*args</code></a>, so any number of <code>lists</code> can be passed to the function.</li> </ul> </li> <li>The type annotations indicate the <code>*args</code> are a <code>list</code> of <code>dataframes</code>, and a <code>tuple</code> of <code>dataframes</code> is returned by the function.</li> <li><code>list(zip(*v))</code> is used to create the correct groups of <code>dataframes</code> for <code>pandas.concat</code>.</li> <li>The number of tables, <code>t#</code>, returned by the function corresponds to the number of <code>dataframes</code> in the <code>lists</code>. <ul> <li>If the <code>lists</code> contain more than the number of dataframes shown (e.g. <code>df5</code>, <code>df5_</code> and <code>df5__</code>, then add a line of code to the function for <code>t5</code> and <code>return</code> it.</li> </ul> </li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd from typing import List, Tuple # for type annotations # function to create report tables def my_func(*v: List[pd.DataFrame]) -&gt; Tuple[pd.DataFrame]: l = list(zip(*v)) # use zip to combine the dataframes into the correct groups t1 = l[0][0].round(2) # l[0] = (df1, df1_, df1__) → l[0][0] = df1 t2 = pd.concat(l[1]).round(2) # l[1] = (df2, df2_, df2__) t3 = pd.concat(l[2]).round(2) # l[2] = (df3, df3_, df3__) t4 = pd.concat(l[3]).applymap('{:.2%}'.format) # l[3] = (df4, df4_, df4__) return t1, t2, t3, t4 # data l1 = [df1, df2, df3, df4] l2 = [df1_, df2_, df3_, df4_] l3 = [df1__, df2__, df3__, df4__] # function call with 3 lists rt1, rt2, rt3, rt4 = my_func(l1, l2, l3) # function call with 2 lists rt1, rt2, rt3, rt4 = my_func(l1, l2) </code></pre>
python|pandas|function|dataframe
0
142
2,890,618
What to beware of reading old Numarray tutorials and examples?
<p>Python currently uses Numpy for heavy duty math and image processing. </p> <p>The earlier Numeric and Numarray are obsolete, but still today there are many tutorials, notes, sample code and other documentation using them. Some of these cover special topics of interest, some are well written but haven't been updated or replaced, or are otherwise of use. Quite a bit is the same between Numeric, Numarray and Numpy, so I usually get good mileage out these older docs. Ocassionaly, though, I run into a line of code that results in error. Not often enough to remember how to get around it, but usually I figure it out at the cost of some time.</p> <p>What are the main things to watch out for when relying on such older documentation for current Numpy use? Is there a list of how to translate the differences that exist? </p>
<p>Two good resources:</p> <ol> <li><a href="http://www.stsci.edu/resources/software_hardware/numarray/numarray2numpy.pdf" rel="nofollow noreferrer">Numarray to numpy guide</a></li> <li><a href="http://www.scipy.org/Converting_from_Numeric" rel="nofollow noreferrer">Differences between Numeric and numpy</a></li> </ol>
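<p>For a flavour of what those documents cover, here are a couple of the renamings they walk through - quoted from memory, so double-check them against the guides linked above before relying on them:</p> <pre><code>import numpy as np

# Old Numeric / numarray spelling      becomes   numpy spelling (runnable below)
# import Numeric                       becomes   import numpy as np
# Numeric.arrayrange(10)               becomes   np.arange(10)
# a.typecode()                         becomes   a.dtype.char (or just inspect a.dtype)

a = np.arange(10, dtype=np.float64)
print(a.dtype, a.dtype.char)   # float64 d
</code></pre>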
numpy
3
143
63,708,496
How to extract document embeddings from HuggingFace Longformer
<p>Looking to do something similar to</p> <pre class="lang-py prettyprint-override"><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') input_ids = torch.tensor(tokenizer.encode(&quot;Hello, my dog is cute&quot;)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple </code></pre> <p>(from <a href="https://github.com/huggingface/transformers/issues/1950" rel="noreferrer">this thread</a>) using the longformer</p> <p>the documentation example seems to do something similar, but is confusing (esp. wrt. how to set the attention mask, I assume I'd want to set it to the <code>[CLS]</code> token, the example sets global attention to random values I think)</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; from transformers import LongformerModel, LongformerTokenizer &gt;&gt;&gt; model = LongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=True) &gt;&gt;&gt; tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') &gt;&gt;&gt; SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document &gt;&gt;&gt; input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1 &gt;&gt;&gt; # Attention mask values -- 0: no attention, 1: local attention, 2: global attention &gt;&gt;&gt; attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention &gt;&gt;&gt; attention_mask[:, [1, 4, 21,]] = 2 # Set global attention based on the task. For example, ... # classification: the &lt;s&gt; token ... # QA: question tokens ... # LM: potentially on the beginning of sentences and paragraphs &gt;&gt;&gt; outputs = model(input_ids, attention_mask=attention_mask) &gt;&gt;&gt; sequence_output = outputs.last_hidden_state &gt;&gt;&gt; pooled_output = outputs.pooler_output </code></pre> <p>(from <a href="https://huggingface.co./transformers/model_doc/longformer.html" rel="noreferrer">here</a>)</p>
<p>You wouldn't need to mess with those values (unless you want to optimize the way longformer attends to different tokens). In the example you've listed above it will enforce global attention to just the 1st, 4th and 21st token. They've put random numbers here but sometimes you might want to globally attend for a certain type of tokens such as the question tokens in a sequence of tokens (ex: &lt;question tokens&gt; + &lt;answer tokens&gt; but only globally attend the first part).</p> <p>If you're looking for just embeddings you can follow what's been discussed <a href="https://stackoverflow.com/questions/64217601/the-last-layers-of-longformer-for-document-embeddings">here :</a><a href="https://stackoverflow.com/questions/64217601/the-last-layers-of-longformer-for-document-embeddings">The last layers of longformer for document embeddings</a>.</p>
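<p>For the embedding use case in the question, a minimal sketch that follows the attention-mask convention shown above (2 = global attention) and puts global attention only on the leading <code>&lt;s&gt;</code>/CLS token might look like this (note that newer versions of transformers instead pass global attention through a separate <code>global_attention_mask</code> argument):</p> <pre><code>import torch
from transformers import LongformerModel, LongformerTokenizer

model = LongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=True)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')

text = ' '.join(['Hello world! '] * 1000)   # stand-in for one long document
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
attention_mask[:, 0] = 2                    # global attention only on the leading special token

outputs = model(input_ids, attention_mask=attention_mask)
doc_embedding = outputs.pooler_output       # shape (1, hidden_size)
# or: doc_embedding = outputs.last_hidden_state[:, 0]   # the first token's final hidden state
</code></pre> <p>Whether <code>pooler_output</code> or the raw first-token hidden state works better as a document embedding is an empirical choice; both come directly from the outputs shown in the question.</p>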
huggingface-transformers
1
144
63,323,045
find duplicated csv columns from list [python pandas]
<p>I want to find duplicate columns from a list, so not just any columns.</p> <p>example of correct csv looks like this:</p> <pre><code>col1, col2, col3, col4, custom, custom 1,2,3,4,test,test 4,3,2,1,test,test </code></pre> <p>list looks like this:</p> <pre><code>columnNames = ['col1', 'col2', 'col3', 'col4'] </code></pre> <p>So when I run something like <code>df.columns.duplicated()</code> I don't want to it detect the duplicate 'custom' fields, only if there is more than one 'col1' column, or more than one 'col2' column, etc, and return True when one of those columns is found to be duplicated.</p> <p>I found when including a duplicate 'colN' column name, col4 in the example, and I print it out, it shows me that <code>index(['col1', 'col2', 'col3', 'col4', 'col4.1'], dtype='object')</code></p> <p>No idea how to write that line of code.</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.isin.html" rel="nofollow noreferrer"><code>Index.isin</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.duplicated.html" rel="nofollow noreferrer"><code>Index.duplicated</code></a> to create a boolean mask:</p> <pre><code>c = df.columns.str.rsplit('.', n=1).str[0] mask = c.isin(columnNames) &amp; c.duplicated() </code></pre> <p>If want to find duplicated column names use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer"><code>boolean indexing</code></a> with this <code>mask</code>:</p> <pre><code>dupe_cols = df.columns[mask] </code></pre>
python|python-3.x|pandas|dataframe|csv
1
145
24,755,012
Pandas Dataframe count availability of string in a list
<p>Lets say I have a Pandas <code>DataFrame</code> like following.</p> <pre><code>In [31]: frame = pd.DataFrame({'a' : ['A/B/C/D', 'A/B/C', 'A/E','D/E/F']}) In [32]: frame Out[32]: a 0 A/B/C/D 1 A/B/C 2 A/E 3 D/E/F </code></pre> <p>And I have string list like following.</p> <pre><code>In [33]: mylist =['A/B/C/D', 'A/B/C', 'A/B'] </code></pre> <p>Here two of the patterns in mylist is available in my DataFrame. So I need to get output saying 2/3*100 = 67%</p> <pre><code>In [34]: pattern = '|'.join(mylist) In [35]: frame.a.str.contains(pattern).count() </code></pre> <p>This is not working. Any help to get my expected output.</p>
<p>You can do this way :</p> <pre><code>In [1]: len(frame[frame.a.isin(mylist)])/float(len(mylist)) * 100 Out[1]: 66.66666666666666 </code></pre> <p>Or with you method :</p> <pre><code>In [2]: pattern = '|'.join(mylist) In [2]: count = frame.a.str.contains(pattern).sum() # will add up True values In [3]: count/float(len(mylist))*100 Out[3]: 66.666666666666 </code></pre>
python|list|pandas
1
146
24,754,496
Pandas: Merge hierarchical data
<p>I am looking for a way to merge data that has a complex hierarchy into a pandas <code>DataFrame</code>. This hierarchy comes about by different inter-dependencies within the data. E.g. there are parameters which define how the data was produced, then there are time-dependent observables, spatially dependent observables, and observables that depend on both time and space.</p> <p>To be more explicit: Suppose that I have the following data.</p> <pre><code># Parameters t_max = 2 t_step = 15 sites = 4 # Purely time-dependent t = np.linspace(0, t_max, t_step) f_t = t**2 - t # Purely site-dependent position = np.array([[0, 0], [1, 0], [0, 1], [1, 1]]) # (x, y) site_weight = np.arange(sites) # Time-, and site-dependent. occupation = np.arange(t_step*sites).reshape((t_step, sites)) # Time-, and site-, site-dependent correlation = np.arange(t_step*sites*sites).reshape((t_step, sites, sites)) </code></pre> <p>(In the end I would, of course, have many of these sets of data. One for each set of parameters.)</p> <p>Now, I would like to stash all this into a pandas <code>DataFrame</code>. I imagine the final result to look something like this:</p> <pre><code>| ----- parameters ----- | -------------------------------- observables --------------------------------- | | | | ---------- time-dependent ----------- | | | ----------- site-dependent --- ) ( ------------------------ | | | | | - site2-dependent - | | | sites | t_max | t_step | site | r_x | r_y | site weight | site2 | correlation | occupation | f_t | time | </code></pre> <p>I suppose that the partially overlapping hierarchies may be impossible to achieve. It's okay if they are implicit, in the sense that I can get e.g. all site-dependent data by indexing the DataFrame in a specific way.</p> <p>Also, please feel free to tell me if you think that there is a better way of arranging this data in Pandas.</p> <h3>Question</h3> <p>How can I construct a <code>DataFrame</code> that contains all the above data, and somehow reflects the inter-dependencies (e.g. <code>f_t</code> depends on <code>time</code>, but not on <code>site</code>). And all that in a way that is sufficiently generic, such that it is easy to add, or remove certain observables, with possibly new inter-dependencies. (E.g. a quantity that depends on a second time-axis, like a time-time-correlation.)</p> <hr> <h2>What I got so far</h2> <p>In the following I will show you how far I've gotten on my own. However, I don't think that this is the ideal way of achieving the above. 
Especially, since it lacks generality with respect to adding, or removing certain observables.</p> <h3>Indices</h3> <p>Given the above data I started out by defining all the multi-indices that I am going to need.</p> <pre><code>ind_time = pd.Index(t, name='time') ind_site = pd.Index(np.arange(sites), name='site') ind_site_site = pd.MultiIndex.from_product([ind_site, ind_site], names=['site', 'site2']) ind_time_site = pd.MultiIndex.from_product([ind_time, ind_site], names=['time', 'site']) ind_time_site_site = pd.MultiIndex.from_product([ind_time, ind_site, ind_site], names=['time', 'site', 'site2']) </code></pre> <h3>Individual <code>DataFrame</code>s</h3> <p>Next, I created data-frames of the individual chunks of data.</p> <pre><code>df_parms = pd.DataFrame({'t_max': t_max, 't_step': t_step, 'sites': sites}, index=[0]) df_time = pd.DataFrame({'f_t': f_t}, index=ind_time) df_position = pd.DataFrame(position, columns=['r_x', 'r_y'], index=ind_site) df_weight = pd.DataFrame(site_weight, columns=['site weight'], index=ind_site) df_occupation = pd.DataFrame(occupation.flatten(), index=ind_time_site, columns=['occupation']) df_correlation = pd.DataFrame(correlation.flatten(), index=ind_time_site_site, columns=['correlation']) </code></pre> <p>The <code>index=[0]</code> in <code>df_parms</code> seems necessary because otherwise Pandas complains about scalar values only. In reality I would probably replace it by a time-stamp of when this particular simulation was run. That would at least convey some useful information.</p> <h3>Merge Observables</h3> <p>With the data-frames available, I join all the observables into one big <code>DataFrame</code>.</p> <pre><code>df_all_but_parms = pd.merge( pd.merge( pd.merge( df_time.reset_index(), df_occupation.reset_index(), how='outer' ), df_correlation.reset_index(), how='outer' ), pd.merge( df_position.reset_index(), df_weight.reset_index(), how='outer' ), how='outer' ) </code></pre> <p>This is the bit that I like the least in my current approach. The <code>merge</code> function only works on pairs of data-frames, and it requires them to have at least one common column. So, I have to be careful about the order of joining my data-frames, and if I were to add an orthogonal observable then I could not merge it with the other data because they would not share a common column. Is there one function available that can achieve the same result with just one single call on a list of data-frames? I tried <code>concat</code> but it wouldn't merge common columns. So, I ended up with lots of duplicate <code>time</code>, and <code>site</code> columns.</p> <h3>Merge All Data</h3> <p>Finally, I merge my data with the parameters.</p> <pre><code>pd.concat([df_parms, df_all_but_parms], axis=1, keys=['parameters', 'observables']) </code></pre> <p>The end-result, so far, looks like this:</p> <pre><code> parameters observables sites t_max t_step time f_t site occupation site2 correlation r_x r_y site weight 0 4 2 15 0.000000 0.000000 0 0 0 0 0 0 0 1 NaN NaN NaN 0.000000 0.000000 0 0 1 1 0 0 0 2 NaN NaN NaN 0.000000 0.000000 0 0 2 2 0 0 0 3 NaN NaN NaN 0.000000 0.000000 0 0 3 3 0 0 0 4 NaN NaN NaN 0.142857 -0.122449 0 4 0 16 0 0 0 .. ... ... ... ... ... ... ... ... ... ... ... ... 
235 NaN NaN NaN 1.857143 1.591837 3 55 3 223 1 1 3 236 NaN NaN NaN 2.000000 2.000000 3 59 0 236 1 1 3 237 NaN NaN NaN 2.000000 2.000000 3 59 1 237 1 1 3 238 NaN NaN NaN 2.000000 2.000000 3 59 2 238 1 1 3 239 NaN NaN NaN 2.000000 2.000000 3 59 3 239 1 1 3 </code></pre> <p>As you can see this does not work very well, since only the first row is actually assigned the parameters. All the other rows just have <code>NaN</code>s in place of the parameters. But, since these are the parameters of all of that data, they should also be contained in all the other rows of this data-frame.</p> <p>As a small side question: How smart would pandas be if I were to store the above data-frame in hdf5. Would I end up with lots of duplicated data, or would it avoid duplicate storage?</p> <hr> <h2>Update</h2> <p>Thanks to <a href="https://stackoverflow.com/a/24758685/841562">Jeff's answer</a> I was able to push all my data into one data-frame with a generic merge. The basic idea is, that all my observables already have a few common columns. Namely, the parameters.</p> <p>First I add the parameters to all my observables' data-frames.</p> <pre><code>all_observables = [ df_time, df_position, df_weight, df_occupation, df_correlation ] flat = map(pd.DataFrame.reset_index, all_observables) for df in flat: for c in df_parms: df[c] = df_parms.loc[0,c] </code></pre> <p>And then I can merge all of them together by reduction.</p> <pre><code>df_all = reduce(lambda a, b: pd.merge(a, b, how='outer'), flat) </code></pre> <p>The result of which has the desired form:</p> <pre><code> time f_t sites t_max t_step site r_x r_y site weight occupation site2 correlation 0 0.000000 0.000000 4 2 15 0 0 0 0 0 0 0 1 0.000000 0.000000 4 2 15 0 0 0 0 0 1 1 2 0.000000 0.000000 4 2 15 0 0 0 0 0 2 2 3 0.000000 0.000000 4 2 15 0 0 0 0 0 3 3 4 0.142857 -0.122449 4 2 15 0 0 0 0 4 0 16 5 0.142857 -0.122449 4 2 15 0 0 0 0 4 1 17 6 0.142857 -0.122449 4 2 15 0 0 0 0 4 2 18 .. ... ... ... ... ... ... ... ... ... ... ... ... 233 1.857143 1.591837 4 2 15 3 1 1 3 55 1 221 234 1.857143 1.591837 4 2 15 3 1 1 3 55 2 222 235 1.857143 1.591837 4 2 15 3 1 1 3 55 3 223 236 2.000000 2.000000 4 2 15 3 1 1 3 59 0 236 237 2.000000 2.000000 4 2 15 3 1 1 3 59 1 237 238 2.000000 2.000000 4 2 15 3 1 1 3 59 2 238 239 2.000000 2.000000 4 2 15 3 1 1 3 59 3 239 </code></pre> <p>By re-indexing the data, the hierarchy becomes a bit more apparent:</p> <pre><code>df_all.set_index(['t_max', 't_step', 'sites', 'time', 'site', 'site2'], inplace=True) </code></pre> <p>which results in</p> <pre><code> f_t r_x r_y site weight occupation correlation t_max t_step sites time site site2 2 15 4 0.000000 0 0 0.000000 0 0 0 0 0 1 0.000000 0 0 0 0 1 2 0.000000 0 0 0 0 2 3 0.000000 0 0 0 0 3 0.142857 0 0 -0.122449 0 0 0 4 16 1 -0.122449 0 0 0 4 17 2 -0.122449 0 0 0 4 18 ... ... ... ... ... ... ... 1.857143 3 1 1.591837 1 1 3 55 221 2 1.591837 1 1 3 55 222 3 1.591837 1 1 3 55 223 2.000000 3 0 2.000000 1 1 3 59 236 1 2.000000 1 1 3 59 237 2 2.000000 1 1 3 59 238 3 2.000000 1 1 3 59 239 </code></pre>
<p>I think you should do something like this, putting <code>df_parms</code> as your index. This way you can easily concat more frames with different parms.</p> <pre><code>In [67]: pd.set_option('max_rows',10) In [68]: dfx = df_all_but_parms.copy() </code></pre> <p>You need to assign the columns to the frame (you can also directly construct a multi-index, but this is starting from your data).</p> <pre><code>In [69]: for c in df_parms.columns: dfx[c] = df_parms.loc[0,c] In [70]: dfx Out[70]: time f_t site occupation site2 correlation r_x r_y site weight sites t_max t_step 0 0.000000 0.000000 0 0 0 0 0 0 0 4 2 15 1 0.000000 0.000000 0 0 1 1 0 0 0 4 2 15 2 0.000000 0.000000 0 0 2 2 0 0 0 4 2 15 3 0.000000 0.000000 0 0 3 3 0 0 0 4 2 15 4 0.142857 -0.122449 0 4 0 16 0 0 0 4 2 15 .. ... ... ... ... ... ... ... ... ... ... ... ... 235 1.857143 1.591837 3 55 3 223 1 1 3 4 2 15 236 2.000000 2.000000 3 59 0 236 1 1 3 4 2 15 237 2.000000 2.000000 3 59 1 237 1 1 3 4 2 15 238 2.000000 2.000000 3 59 2 238 1 1 3 4 2 15 239 2.000000 2.000000 3 59 3 239 1 1 3 4 2 15 [240 rows x 12 columns] </code></pre> <p>Set the index (this returns a new object)</p> <pre><code>In [71]: dfx.set_index(['sites','t_max','t_step']) Out[71]: time f_t site occupation site2 correlation r_x r_y site weight sites t_max t_step 4 2 15 0.000000 0.000000 0 0 0 0 0 0 0 15 0.000000 0.000000 0 0 1 1 0 0 0 15 0.000000 0.000000 0 0 2 2 0 0 0 15 0.000000 0.000000 0 0 3 3 0 0 0 15 0.142857 -0.122449 0 4 0 16 0 0 0 ... ... ... ... ... ... ... ... ... ... 15 1.857143 1.591837 3 55 3 223 1 1 3 15 2.000000 2.000000 3 59 0 236 1 1 3 15 2.000000 2.000000 3 59 1 237 1 1 3 15 2.000000 2.000000 3 59 2 238 1 1 3 15 2.000000 2.000000 3 59 3 239 1 1 3 [240 rows x 9 columns] </code></pre>
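<p>To illustrate the point about concatenating more runs, here is a minimal sketch (not from the original answer — the second frame <code>dfy</code> and its parameter values are hypothetical, just to show the shape of the result):</p> <pre><code># pretend we have a second simulation run with a different parameter set
dfy = df_all_but_parms.copy()
dfy['sites'], dfy['t_max'], dfy['t_step'] = 8, 4, 30   # hypothetical parameters for run 2

combined = pd.concat([
    dfx.set_index(['sites', 't_max', 't_step']),
    dfy.set_index(['sites', 't_max', 't_step']),
])
# `combined` now holds both runs, distinguishable by the parameter part of the index
</code></pre>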
python|pandas|merge|dataframe
2
147
53,693,237
Search column for multiple strings but show faults Python Pandas
<p>I am searching a column in my data frame for a list of values contained in a CSV that I have converted to a list. Searching for those values is not the issue here. </p> <pre><code>import pandas as pd df = pd.read_csv('output2.csv') hos = pd.read_csv('houses.csv') parcelid_lst = hos['Parcel ID'].tolist() result = df.loc[df['PARID'].isin(parcelid_lst)] result </code></pre> <p>What I would like to do is once the list has been searched and the data frame is shown with the "found" values I would also like to print or display a list of the values from the list that were "unfound" or did not exist in the data frame column I was searching. </p> <p>Is there a specific method to call to do this? </p> <p>Thank you in advance!</p>
<p>After reconsidering my question and thinking about it a little bit differently, the solution I found is to turn all the values in the data frame in the 'PARID' column into a list. Then compare the 'parcelid_lst' to it. </p> <p>This resulted in a list of all the values that did not exist in the data frame but did exist in the 'parcelid_lst' </p> <pre><code>df = pd.read_csv('output2.csv') allparids = df['PARID'].tolist() hos = pd.read_csv('houses.csv') parcelid_lst = hos['Parcel ID'].tolist() list(set(parcelid_lst) - set(allparids)) </code></pre>
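<p>An alternative sketch that stays in pandas and avoids building the intermediate lists (same column names as above): negate <code>isin</code> on the smaller frame directly.</p> <pre><code># Parcel IDs present in 'houses.csv' but not found in 'output2.csv'
unfound = hos.loc[~hos['Parcel ID'].isin(df['PARID']), 'Parcel ID'].tolist()
print(unfound)
</code></pre>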
python|pandas
0
148
53,414,818
Python: How do I change a value in column A if another value in column B repeats itself?
<p>I have many excel files with the same columns in one folder. I need to browse each file and compare which values of the column "User Number" of one file are the same as the other file. And then manipulate another column named "Date" based on that. For exemple:</p> <pre><code>A2018_02_01 file has: User_Number Date 18732A 2017-06-22 27192B 2017-08-06 23872Z 2017-08-06 82716A 2017-09-18 77629B 2017-09-12 A2018_02_02 file has: User_Number Date 18732A 2017-06-22 27192B 2017-08-06 54321R 2017-12-11 23872Z 2017-11-04 18732A 2017-06-25 </code></pre> <p>So in this case I want the program to check for matches of User Number values and then, if the date - linked to this number - of one file is different from the date of the other file, I want to change both dates to be the oldest date.</p> <p>In this case I would have:</p> <pre><code>A2018_02_01 file has: User_Number Date 18732A 2017-06-22 27192B 2017-08-06 23872Z 2017-08-06 82716A 2017-09-18 77629B 2017-09-12 A2018_02_02 file has: User_Number Date 18732A 2017-06-22 27192B 2017-08-06 54321R 2017-12-11 23872Z 2017-08-06 18732A 2017-06-22 </code></pre> <p>I appended all the files:</p> <pre><code>import os import glob import pandas as pd path=r'C/.../files' files = os.listdir(path) df = pd.DataFrame() for f in glob.glob(path + "/*.xlsx"): data = pd.read_excel(f,header=2) df=df.append(data) df["Date"]=pd.to_datetime(df["Date"], errors='coerce') </code></pre> <p>The logic doesn't work like javascript logic, so I'm not sure how to do the condition. I've tried:</p> <pre><code>df_number = df["User Number"] for number in df[df_number.duplicated()]: number.df["Date"]number.df["Date"].min() </code></pre> <p>And other methods, but nothing works. Any help is appreciated.</p>
<p>My solution is to create a master mapper with all the min dates:</p> <pre><code>master = pd.concat([df1, df2]).groupby('User_Number').min()
</code></pre> <p>and then join each dataframe to the master to find the adjusted date:</p> <pre><code>df1.join(master, rsuffix='_adj', on='User_Number')[['User_Number', 'Date_adj']]
df2.join(master, rsuffix='_adj', on='User_Number')[['User_Number', 'Date_adj']]
</code></pre> <p>Output:</p> <pre><code>  User_Number   Date_adj
0      18732A 2017-06-22
1      27192B 2017-08-06
2      23872Z 2017-08-06
3      82716A 2017-09-18
4      77629B 2017-09-12

  User_Number   Date_adj
0      18732A 2017-06-22
1      27192B 2017-08-06
2      54321R 2017-12-11
3      23872Z 2017-08-06
4      18732A 2017-06-22
</code></pre> <p>Adapting it to your code:</p> <pre><code>list_of_df = []
for f in glob.glob(path + "/*.xlsx"):
    data = pd.read_excel(f, header=2)
    list_of_df.append(data)

df = pd.concat(list_of_df)
df["Date"] = pd.to_datetime(df["Date"], errors='coerce')
master = df.groupby('User_Number').min()

for aux_df in list_of_df:
    aux_df['Date'] = aux_df.join(master, rsuffix='_adj', on='User_Number')['Date_adj']
</code></pre>
python|pandas|dataframe|series|glob
2
149
53,563,828
String processing in Python
<p>I have a text file from which I am trying to create a pandas DF</p> <pre><code>Name John Doe Country Wakanda Month of birth January 1900 social status married .... </code></pre> <p>After every 4 lines a new record similar to that is present. The structure of data frame I am trying to create it</p> <pre><code> Name Country . Month of Birth . social status 0 . John Doe . Wakanda January 1900 married </code></pre> <p>Current Approach:</p> <p>I am using a very inefficient iterative approach to extract the records as list of lists, where each list is a row for the DF.</p> <p>Is there a better pythonic approach to separate the column names from the values, and extract the values alone. </p> <p>PS. I am not asking for code. Any suggestion on the approach would be great. </p>
<p>Perhaps an approach could be to keep a list of potential key phrases for each field, and for each entry iterate through this list and remove the key phrase when it matches.</p> <p>As an example for an individual entry:</p> <pre><code>text = 'Month of birth January 1900'
keys = ['Month of birth', 'Date of birth', 'Birth']
</code></pre> <p>When you look for the matches, an option is to select the shortest remaining string, meaning that more words have matched. (Note that <code>str.strip</code> removes any leading/trailing characters from the given set rather than the exact phrase, so it only works here by coincidence — a prefix-based removal is safer, see the sketch below.)</p> <pre><code>min([text.strip(x) for x in keys], key=len)
'January 1900'
</code></pre> <p>You simply follow this approach for the different fields and build a dataframe from the resulting strings. You could also consider stemming the strings before searching the keywords. Hope this helps.</p>
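<p>A more robust minimal sketch (not part of the original answer — it assumes the key phrases always appear as a prefix of the field, and uses plain string slicing so there is no Python-version requirement):</p> <pre><code>def remove_prefix(text, keys):
    # Return the text with the first matching key phrase removed from the front.
    for key in keys:
        if text.startswith(key):
            return text[len(key):].strip()
    return text

keys = ['Month of birth', 'Date of birth', 'Birth']
print(remove_prefix('Month of birth January 1900', keys))  # 'January 1900'
</code></pre>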
python|pandas
0
150
53,447,500
How do I initialise the plot of my function to start at 0?
<p>This is probably a really stupid question, but I just can't seem to work out how to do it for some reason. I've created a function for a random walk here which just uses the numpy binomial function with one trial (ie if it's under 0.5 it's -1, over it's +1. However this obviously makes the first value of the function either -1 or +1, whereas I want it to be zero. How can I change this?</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def random_walk(N,d): walk = d*np.cumsum(2*np.random.binomial(1,.5,N-1)-1) return walk plt.plot(np.arange(1,250),random_walk(250,1)) plt.show() </code></pre> <p>Again, it's probably really simple and I'm being really stupid, but I'd really appreciate the help!</p>
<p>Little tweak to your code. Hopefully this is what you're looking for.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def random_walk(N,d): walk = np.concatenate(([0],np.cumsum(2*np.random.binomial(1,.5,N-1)-1))) return walk plt.plot(np.arange(0,250),random_walk(250,1)) plt.show() </code></pre>
python|numpy|matplotlib|random-walk
0
151
15,951,488
plotting dendrograms with scipy in Python
<p>The scipy <code>dendrogram</code> documentation says:</p> <pre><code>dendrogram(Z, ...) The dendrogram illustrates how each cluster is composed by drawing a U-shaped link between a non-singleton cluster and its children. ...It is expected that the distances in Z[:,2] be monotonic, otherwise crossings appear in the dendrogram. </code></pre> <p>I'm unclear about the sentence about "It is expected that the distances in Z[:,2] be monotonic, otherwise crossings appear in the dendrogram"? What crossing in the diagram is referred to? Can someone please show an example where this happens for a particular distance matrix with an explanation of why?</p> <p>Is this an example of a crossing? seems to me this arises just by some symmetries in distance matrix... <img src="https://i.stack.imgur.com/wKU2a.png" alt="enter image description here"></p>
<p>Z is supposed to specify merges of clusters (which 2 clusters are merged) and the "time" they happen, where "time" is the y-axis of the dendrogram (this is what they mean by distances). Z is usually constructed so that "time" is in increasing order, which also makes it easy to plot so that U shapes are not on top of each other. If you plot the U's in a different order, they may overlap each other and it will looked messed up - that is what is referred to as crossings.</p> <p>I ran a short example, this is an example of what a crossing will look like:</p> <p><img src="https://i.stack.imgur.com/8mmmJ.png" alt="enter image description here"></p> <p>Bottom line: stick with the correct order.</p>
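<p>To make this concrete, here is a minimal sketch with a hand-built linkage matrix whose heights (<code>Z[:, 2]</code>) are not monotonic; plotting it shows the second link drawn below the top of the first one, i.e. a crossing/inversion:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

# 3 leaves (0, 1, 2); each row of Z is [cluster_a, cluster_b, distance, size]
Z = np.array([[0., 1., 0.6, 2.],   # merge leaves 0 and 1 at height 0.6 -> cluster 3
              [2., 3., 0.4, 3.]])  # merge leaf 2 with cluster 3 at the *smaller* height 0.4

dendrogram(Z)   # the second U-link dips below the first one -> crossing
plt.show()
</code></pre>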
python|numpy|scipy
1
152
16,729,574
How can I get a value from a cell of a dataframe?
<p>I have constructed a condition that extracts exactly one row from my data frame:</p> <pre><code>d2 = df[(df['l_ext']==l_ext) &amp; (df['item']==item) &amp; (df['wn']==wn) &amp; (df['wd']==1)] </code></pre> <p>Now I would like to take a value from a particular column:</p> <pre><code>val = d2['col_name'] </code></pre> <p>But as a result, I get a data frame that contains one row and one column (i.e., one cell). It is not what I need. I need one value (one float number). How can I do it in pandas?</p>
<p>If you have a DataFrame with only one row, then access the first (only) row as a Series using <em><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="noreferrer">iloc</a></em>, and then the value using the column name:</p> <pre><code>In [3]: sub_df Out[3]: A B 2 -0.133653 -0.030854 In [4]: sub_df.iloc[0] Out[4]: A -0.133653 B -0.030854 Name: 2, dtype: float64 In [5]: sub_df.iloc[0]['A'] Out[5]: -0.13365288513107493 </code></pre>
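<p>If you already know the row label (or its position) and the column, the scalar accessors are a slightly more direct sketch of the same idea:</p> <pre><code>sub_df.at[sub_df.index[0], 'A']   # label-based scalar access
sub_df.iat[0, 0]                  # purely position-based scalar access
</code></pre>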
python|pandas|dataframe
731
153
22,326,882
Construct single numpy array from smaller arrays of different sizes
<p>I have an array of values, <i>x</i>. Given 'start' and 'stop' indices, I need to construct an array <i>y</i> using sub-arrays of <i>x</i>.</p> <pre><code>import numpy as np x = np.arange(20) start = np.array([2, 8, 15]) stop = np.array([5, 10, 20]) nsubarray = len(start) </code></pre> <p>Where I would like <i>y</i> to be:</p> <pre><code>y = array([ 2, 3, 4, 8, 9, 15, 16, 17, 18, 19]) </code></pre> <p>(In practice the arrays I am using are much larger).</p> <p>One way to construct <i>y</i> is using a list comprehension, but the list needs to be flattened afterwards:</p> <pre><code>import itertools as it y = [x[start[i]:stop[i]] for i in range(nsubarray)] y = np.fromiter(it.chain.from_iterable(y), dtype=int) </code></pre> <p>I found that it is actually faster to use a for-loop:</p> <pre><code>y = np.empty(sum(stop - start), dtype = int) a = 0 for i in range(nsubarray): b = a + stop[i] - start[i] y[a:b] = x[start[i]:stop[i]] a = b </code></pre> <p>I was wondering if anyone knows of a way that I can optimize this? Thank you very much!</p> <p><strong>EDIT</strong></p> <p>The following tests all of the times:</p> <pre><code>import numpy as np import numpy.random as rd import itertools as it def get_chunks(arr, start, stop): rng = stop - start rng = rng[rng!=0] #Need to add this in case of zero sized ranges np.cumsum(rng, out=rng) inds = np.ones(rng[-1], dtype=np.int) inds[rng[:-1]] = start[1:]-stop[:-1]+1 inds[0] = start[0] np.cumsum(inds, out=inds) return np.take(arr, inds) def for_loop(arr, start, stop): y = np.empty(sum(stop - start), dtype = int) a = 0 for i in range(nsubarray): b = a + stop[i] - start[i] y[a:b] = arr[start[i]:stop[i]] a = b return y xmax = 1E6 nsubarray = 100000 x = np.arange(xmax) start = rd.randint(0, xmax - 10, nsubarray) stop = start + 10 </code></pre> <p>Which results in:</p> <pre><code>In [379]: %timeit np.hstack([x[i:j] for i,j in it.izip(start, stop)]) 1 loops, best of 3: 410 ms per loop In [380]: %timeit for_loop(x, start, stop) 1 loops, best of 3: 281 ms per loop In [381]: %timeit np.concatenate([x[i:j] for i,j in it.izip(start, stop)]) 10 loops, best of 3: 97.8 ms per loop In [382]: %timeit get_chunks(x, start, stop) 100 loops, best of 3: 16.6 ms per loop </code></pre>
<p>This is a bit complicated, but quite fast. Basically what we do is create the index list based off vector addition and the use <code>np.take</code> instead of any python loops:</p> <pre><code>def get_chunks(arr, start, stop): rng = stop - start rng = rng[rng!=0] #Need to add this in case of zero sized ranges np.cumsum(rng, out=rng) inds = np.ones(rng[-1], dtype=np.int) inds[rng[:-1]] = start[1:]-stop[:-1]+1 inds[0] = start[0] np.cumsum(inds, out=inds) return np.take(arr, inds) </code></pre> <p>Check that it is returning the correct result:</p> <pre><code>xmax = 1E6 nsubarray = 100000 x = np.arange(xmax) start = np.random.randint(0, xmax - 10, nsubarray) stop = start + np.random.randint(1, 10, nsubarray) old = np.concatenate([x[b:e] for b, e in izip(start, stop)]) new = get_chunks(x, start, stop) np.allclose(old,new) True </code></pre> <p>Some timings:</p> <pre><code>%timeit np.hstack([x[i:j] for i,j in zip(start, stop)]) 1 loops, best of 3: 354 ms per loop %timeit np.concatenate([x[b:e] for b, e in izip(start, stop)]) 10 loops, best of 3: 119 ms per loop %timeit get_chunks(x, start, stop) 100 loops, best of 3: 7.59 ms per loop </code></pre>
python|arrays|optimization|numpy
3
154
22,067,325
What type of object does each column contain: getting more detail than dtypes
<p>I often find myself changing the types of data in columns of my dataframes, converting between datetime and timedelta types, or string and time etc. So I need a way to check which data type each of my columns has. </p> <p>df.dtypes is fine for numeric object types, but for everything else just shows 'object'. So how can I find out what kind of object?</p>
<p>You can inspect one of the cells to find the type.</p> <pre><code>import pandas as pd #assume some kind of string and int data records = [["a",1], ["b",2]] df = pd.DataFrame(records) df.dtypes &gt;0 object &gt;1 int64 &gt;dtype: object </code></pre> <p>So pandas knows that column 1 is integer storage but column zero is shown as object.</p> <pre><code>df[0].dtype &gt;dtype('O') </code></pre> <p>This still shows "Object" storage.</p> <pre><code>type(df[0][0]) &gt;str </code></pre> <p>Voila.</p> <p>Of course, this depends on your exact data structure. If you've got NaNs anywhere in the column then it sometimes plays havoc with the converted type (havoc as in its not always clear why it ends up as object storage).</p>
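<p>To survey the Python types across a whole column or the whole frame at once (rather than inspecting single cells), a small sketch — note <code>applymap</code> is the classic spelling; newer pandas (2.1+) also offers <code>DataFrame.map</code>:</p> <pre><code>df[0].map(type).value_counts()   # distribution of cell types in column 0
df.applymap(type)                # DataFrame of the per-cell Python types
</code></pre>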
pandas
1
155
17,776,075
Numpy: Is it possible to display numbers in comma-separated form, like 1,000,000?
<p>I have a numpy array like this:</p> <pre><code>[ 1024 303 392 4847 7628 6303 8898 10546 11290 12489 19262 18710 20735 24553 24577 28010 31608 32196 32500 32809 37077 37647 44153 46045 47562 48642 50134 50030 52700 52628 51720 53844 56640 56856 57945 58639 57997 63326 64145 65734 67148 68086 68779 68697 70132 71014 72830 77288 77502 77537 78042 79623 81151 81584 81426 84030 86879 86171 89771 88367 90440 92640 93369 93818 97085 98787 98867 100471 101473 101788 102828 104558 105144 107242 107970 109785 111856 111643 113011 113454 116367 117602 117507 120910 121167 122150 123385 123079 125537 124702 130226 130943 133885 134308 133947 136145 137959 137894 142173 141912 142763 142003 145996 145402 146866 146092 147825 148910 147713 149825 151061 153643 154977 156978 158904 157954 160316 161523 163053 165434 167300 167715 166813 168388 170351 171987 172649 177108 178431 178943 179275 178842 182607 182385 184192 186173 184352 188252 190069 190973 193528 194201 193948 195272 196028 196983 197949 200612 200036 202586 203816 204169 204442 206565 204978 207570 208841 209840 211022 215287 216581 218759 219129 219654 221390 223196 222838 226258 227427 228720 228618 229596 230456 232478 234833 235885 236174 240016 241327 240405 245089 246395 246427 248713 250445 251459 250243 250142 252208 255305 257085 259021 261418 260371 262429 266987 268073 267347 267778 272190 274298 276432 278301 278566 281415 286693 286916 290180 291991 293615 294196 294794 295632 295801 296841 297921 297851 298639 299947 300571 303739 305400 305893 308541 308502 309391 311174 313150 313809 313503 314947 314267 316401 315598 315667 319040 318215 322159 322351 326841 329272 329970 331086 330680 333592 335304 338395 339535 338490 340901 340224 342214 344058 344265 345342 348066 349170 351184 351246 350390 352825 354106 353678 355172 356021 356572 358199 358499 360150 359673 361638 361261 364342 363712 363537 364614 365605 370378 369543 372492 371458 374351 374062 377692 380780 383285 386580 391073 390572 390016 390071 391357 391443 393495 395623 396069 398131 397323 397600 401621 402409 402653 402565 404011 406677 408032 412484 412818 414683 415563 416881 417693 418979 421372 422183 424204 428040 433048 436204 441467 441364 444357 445020 446317 447746 450215 452156 452459 453675 455563 458602 457832 459647 459422 460776 462066 463088 464990 465594 465412 467838 470474 469814 472107 471190 474962 473129 475885 476326 477163 477549 480703 482112 483272 485919 489819 493653 494763 497317 499973 501417 502259 505029 505738 506419 505987 509523 510927 511615 510642 512194 514167 515398 515899 514871 516935 517935 518745 520151 523230 522624 524360 527499 527713 529840 533364 533427 535012 535626 536789 538309 539294 541628 543409 543257 547659 548805 547957 549206 550418 551496 553944 554964 556040 555442 556115 558035 559012 559996 560687 561125 562147 561847 564313 565764 566978 568285 571312 570638 573771 575404 576862 576623 578010 581445 581721 582612 583485 584905 584490 587062 588413 590182 590895 592253 593207 592167 592778 594918 595386 595313 596638 599286 600967 600104 603553 603062 604840 605574 608996 608342 609718 613394 616706 620509 620742 623473 627696 628046 630422 629559 631104 632706 631853 631558 634244 633644 635318 637530 639561 639621 640990 642450 644077 646093 646231 645289 648794 650183 651224 650614 652121 653160 653916 654878 653366 656464 656765 659205 660318 661160 661733 664133 666687 666141 667800 670065 669697 673198 674909 679237 678841 680237 681066 683609 683774 687714 688250 688348 688409 690934 
691247 690561 692331 694604 692233 694565 697065 696502 699490 698759 704335 704495 707785 710077 708889 711285 712660 713194 713032 715592 716780 717421 719728 718980 720024 721276 722931 721172 723217 724522 725116 726530 727363 728557 729932 730517 731753 733026 733901 734254 733754 735812 737422 738840 741603 743077 742448 744012 746448 747913 748561 750163 750220 751494 751775 755024 754450 756719 759364 758661 760435 762363 764661 764426 765811 767944 769395 768974 769107 768022 771572 773970 773237 774987 778125 779134 778529 779513 782699 784062 785550 785809 787398 787119 787461 792378 793407 795447 798216 798111 800309 800055 799506 803787 805761 807160 807536 807857 813805 814900 815323 815944 818673 820553 821977 823213 824189 823973 825921 828105 830929 829447 832115 835241 837169 837700 838019 840313 843718 843404 845531 844335 847409 847815 851908 850225 850872 854830 856319 858022 857802 858226 859101 859043 860669 862139 861543 862198 862803 863898 864713 865809 864774 867725 869353 870102 869142 870530 872039 873075 875093 875414 877401 879831 879495 883657 884148 886047 887192 889179 890189 891934 893670 894898 895035 898261 899598 902954 901823 903611 903217 905733 912109 912091 912521 917189 917015 919413 925544 925781 927930 932198 933415 932876 935606 937908 936373 939053 938267 942618 942942 945486 946063 947948 949637 950909 951936 950532 953508 955364 957614 960969 960691 961469 962765 964259 963586 966093 965046 965127 967633 968184 970414 970205 969498 970840 973759 975468 978685 979028 981163 983389 983760 984413 985753 985216 988019 989954 990162 990550 989592 992109 993933 992624 994876 993699 995831 999375 997783 999865 1002329 1003334 1001654 1005034 1006174 1010507 1011310 1012061 1012837 1014860 1015783 1019508 1022806 1023917 1024263 1027503 1026829 1028601 1031338 1032535 1033546 1033509 1036241 1036561 1038885 1041618 1043533 1045119 1047536 1046882 1048368 1047887 2047 392 4847 7628 6303 8898 10353 11290 10546 12489 18710 20735 20539 24577 27082 28010 29933 31608 32196 32500 32809 37077 37647 44153 46045 46597 48642 49825 50134 52628 53055 53844 56856 55986 57945 57997 58639 64145 65734 68086 68779 71014 71323 72830 77502 77537 77288 79869 78574 80348 79623 81584 81151 84030 86879 87747 89771 88367 90440 92640 93818 97085 98787 100471 101250 102186 101473 101788 102828 106089 105701 107242 107970 109785 112702 113454 116367 117602 118378 117507 120224 120910 122150 123385 123825 130943 130123 133885 134308 133543 136145 137959 137894 139657 141912 142173 142003 145122 145402 146866 146092 147825 148910 151061 153643 154977 157613 158904 157954 162844 164345 165668 165434 167300 166813 168388 170351 170859 171600 172649 177108 178431 179304 178842 182385 182607 183418 184352 188252 190069 190973 193528 193948 194201 195272 196028 196983 197949 200612 200036 203193 204442 203816 204978 206565 207607 207570 208841 211579 211022 215287 216581 219129 218759 219654 222196 223196 221999 226258 227427 228720 229596 230047 230456 232478 234623 234833 234131 235885 236174 240230 240498 240016 241327 240405 242923 246395 246427 248173 250445 251459 250243 255305 255883 257085 259021 260371 261951 262195 264831 266987 268073 267778 272190 274298 276432 279860 278566 281415 286693 286916 286991 289070 291991 293615 293898 294168 295801 295050 297921 298639 298449 299179 303739 305893 307647 309098 309391 313150 313809 313503 314947 314267 316401 315598 317854 322159 322351 326841 326917 329272 329631 329970 331086 330680 333592 335304 338395 339535 338490 340901 
340224 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] </code></pre> <p>I would like to see those numbers in comma-separated values, i.e.: 204,978 instead of 204978. Is this feasible within numpy/scipy?</p>
<p>If you want to apply your printing preference globally; you could use <code>numpy.set_printoptions()</code>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.array([338490, 340901, 340224]) &gt;&gt;&gt; a array([338490, 340901, 340224]) &gt;&gt;&gt; np.set_printoptions(formatter={'int_kind': '{:,}'.format}) &gt;&gt;&gt; a array([338,490, 340,901, 340,224]) &gt;&gt;&gt; print(a) [338,490 340,901 340,224] </code></pre>
python|arrays|numpy|scipy|number-formatting
5
156
55,423,683
Conditionally change year in DOB
<p>I have a 'date' column which I cleaned to change all dates to the same format (date/month/year).</p> <p>Since originally some dates ended with the year being two digits eg. <code>2/7/95</code>, they got converted to <code>02/07/2095</code>. However, I need to change the year of those dates that are 21st century, to 20th century, so <code>20yy</code> -> <code>19yy</code>. </p> <p>This is my function at the moment:</p> <pre><code>df['date'] = pd.to_datetime(df['date']).dt.strftime('%d/%m/%Y') </code></pre> <p>Input -> Function output -> Expected output:</p> <pre><code> 07/12/02 -&gt; 07/12/2002 -&gt; 07/12/1902 07-Sep-09 -&gt; 07/09/2019 -&gt; 07/09/1919 </code></pre> <p>How do I:</p> <ul> <li>Extract the Year section after function</li> <li>Check whether it needs to be changed <ul> <li>Change Year if yes</li> </ul></li> </ul> <p>I've tried this: </p> <pre><code>year= pd.DatetimeIndex(df['date']).year if year.any() &gt; 2000: subset['date']= pd.Timedelta(pd.offsets.year(1000)) </code></pre>
<p><code>dt.strftime</code> converts datetime to other formats, but then dtype of column will be object (string). </p> <pre><code>df['date'] = pd.to_datetime(df['date']).apply(lambda x: x - pd.DateOffset(years=100) if x.year &gt;= 2000 else x) </code></pre> <p>If you want the same datetime formatting again the use</p> <pre><code>df['date'] = pd.to_datetime(df['date']).apply(lambda x: x - pd.DateOffset(years=100) if x.year &gt;= 2000 else x).dt.strftime('%d/%m/%Y') </code></pre>
python|pandas|datetime
0
157
55,517,960
How do I find the minimum of a numpy matrix? (In this particular case)
<p>I have a numpy matrix as follows <br></p> <pre><code>[['- A B C D E'] ['A 0 2 3 4 5'] ['B 2 0 3 4 5'] ['C 3 3 0 4 5'] ['D 4 4 4 0 5'] ['E 5 5 5 5 0']] </code></pre> <p>How do I find the <strong>minimum</strong> in this matrix along with the <strong>index</strong> of this minimum, <strong>excluding</strong> all of the zeros when considering the minimum?</p> <p>I tried several methods I saw online, but I would almost always get the following error: <code>TypeError: cannot perform reduce with flexible type</code></p> <p><br>I would appreciate any new solutions that I can try and check if it works?</p>
<p>You need to go back to the drawing board with your 'numpy' matrix: that is not a matrix, but a list of lists, each containing a single string.</p> <pre><code>x = ['- A B C D E',
     'A 0 2 3 4 5',
     'B 2 0 3 4 5',
     'C 3 3 0 4 5',
     'D 4 4 4 0 5',
     'E 5 5 5 5 0']

# Preprocess this data to make it a numeric matrix
x = [e.split() for e in x]
numbers = set("0123456789")
xr = [[float(e) if all(c in numbers for c in e) and e != "0" else float("inf") for e in l] for l in x]
</code></pre> <p>Everything that is not a number, or is 0, is marked as <code>float("inf")</code> so it does not get in the way of the minimum calculation:</p> <pre><code>[[inf, inf, inf, inf, inf, inf],
 [inf, inf, 2.0, 3.0, 4.0, 5.0],
 [inf, 2.0, inf, 3.0, 4.0, 5.0],
 [inf, 3.0, 3.0, inf, 4.0, 5.0],
 [inf, 4.0, 4.0, 4.0, inf, 5.0],
 [inf, 5.0, 5.0, 5.0, 5.0, inf]]
</code></pre> <p>You can then easily use numpy's <code>argmin</code> and <code>unravel_index</code> to get what you want.</p> <pre><code>import numpy as np

xrn = np.array(xr)
index = np.unravel_index(np.argmin(xrn), xrn.shape)
# RESULT: (1, 2)
</code></pre>
python|numpy|matrix
2
158
56,537,706
I want to sort a dataframe based on the difference of two rows of a single column
<p>I have a dataframe.</p> <pre><code> Item Type Year_Month Total Cost Cereal Jul-2017 6000 Cereal Jun-2017 5000 Baby Food Jul-2017 3000 Baby Food Jun-2017 2900 Snacks Jul-2017 4500 Snacks Jun-2017 4000 </code></pre> <p>I wnat to sort the dataframe according to the difference of two rows of a single column. For example For Cereal the difference is 6000-5000 =1000 and for Snacks the difference is 4500-4000 = 500 and for baby food the difference is 3000- 2900 = 100</p> <p>So the output should be like</p> <pre><code> Item Type Year_Month Total Cost Cereal Jul-2017 6000 Cereal Jun-2017 5000 Snacks Jul-2017 4500 Snacks Jun-2017 4000 Baby Food Jul-2017 3000 Baby Food Jun-2017 2900 </code></pre>
<p>First you need to calculate the differences for each item type. One way to do this with pandas is <code>pivot_table</code>. Here you tell it which dataframe to use (df), which column to calculate on (values="TotalCost"), what function to apply (aggfunc=np.diff) and how to group (index=["ItemType"]). Column names are written without spaces here — adapt them to your actual 'Item Type'/'Total Cost' labels.</p> <pre><code>diff = pd.pivot_table(df, values="TotalCost", index=["ItemType"], aggfunc=np.diff)
</code></pre> <p>Your case above only has 2 possible months. If you had more than two, then np.diff would give you values in a list. In that case you have two options. Either you filter the data frame so there are only two months in it, which can be done like this:</p> <pre><code>df = df[[a or b for a, b in zip(df["Year_Month"] == "Jul-2017", df["Year_Month"] == "Jun-2017")]]
</code></pre> <p>The other option is to calculate the mean difference over the months. This can be done with the following function, which you would then use in place of np.diff:</p> <pre><code>def mean_diff(l):
    return np.mean(np.diff(l))
</code></pre> <p>Then you can use the pivot result to attach the difference to each row:</p> <pre><code>df["Diff"] = [float(diff.loc[d]) for d in df["ItemType"]]
</code></pre> <p>After that, you just sort by the difference (and then by item, in case there are multiple items with the same difference):</p> <pre><code>df.sort_values(by=["Diff", "ItemType", "Year_Month"]).drop(columns='Diff')
</code></pre>
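<p>As a small aside (not part of the original answer), the two-month filter above can also be written with <code>isin</code>, which reads a bit more directly:</p> <pre><code>df = df[df["Year_Month"].isin(["Jul-2017", "Jun-2017"])]
</code></pre>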
python|pandas|sorting|dataframe
3
159
56,769,787
Find distinct values in a column if the dataframe contains lists in columns
<p>lets assume we have the following dataframe:</p> <pre><code>d = {'col1': [[1,2], [1,2], [2,1]], 'col2': ['A', 'B', 'C']} df = pd.DataFrame(data=d) df col1 col2 [1, 2] A [1, 2] B [2, 1] C </code></pre> <p>Where I have a list in a column in the dataframe, how can I count the distinct values in each column? The function <code>df.nunique()</code>is not working it gives this error: <code>TypeError: ("unhashable type: 'list'", 'occurred at index :97A::SAFE')</code></p> <p>The expected output would be:</p> <pre><code>col1 2 col2 3 </code></pre> <p>I need a solution which is appliciable over more columns, my original dataframe will have several columns and I will not know which one contains a list and which one not.</p>
<p>For the column containing lists, you can map the values to <code>tuples</code>, <em>which are hashable</em>, and then use <code>nunique</code>:</p> <pre><code>df.col1.map(tuple).nunique() # 2 </code></pre> <hr> <pre><code>df['col1'] = df.col1.map(tuple) df.nunique() col1 2 col2 3 dtype: int64 </code></pre> <hr> <p>If you do not know which columns might contain lists:</p> <pre><code>df.applymap(tuple).nunique() col1 2 col2 3 dtype: int64 </code></pre> <p>Or checking specifically which columns contain lists:</p> <pre><code>cols = [i for i, ix in enumerate(df.loc[0].values) if isinstance(ix, list)] df.iloc[:,cols] = df.iloc[:,cols].applymap(tuple) df.nunique() </code></pre>
python|pandas
3
160
56,795,642
The performance of GPU still slow even by keras fit_generator method
<p>I have a large dataset 5GB that I want to use for training a neural network model designed using Keras. Although I am using Nvidia Tesla P100 GPU, the training is really slow (each epoch takes ~ 60-70s) (I choose the <code>batch size=10000</code>). After reading and searching, I found out that I can improve the training speed by using keras <a href="https://keras.io/models/sequential/#fit_generator" rel="nofollow noreferrer">fit_generator</a> instead of the typical <code>fit</code>. To do so, I coded the following:</p> <pre><code>from __future__ import print_function import numpy as np from keras import Sequential from keras.layers import Dense import keras from sklearn.model_selection import train_test_split def generator(C, r, batch_size): samples_per_epoch = C.shape[0] number_of_batches = samples_per_epoch / batch_size counter = 0 while 1: X_batch = np.array(C[batch_size * counter:batch_size * (counter + 1)]) y_batch = np.array(r[batch_size * counter:batch_size * (counter + 1)]) counter += 1 yield X_batch, y_batch # restart counter to yeild data in the next epoch as well if counter &gt;= number_of_batches: counter = 0 if __name__ == "__main__": X, y = readDatasetFromFile() X_tr, X_ts, y_tr, y_ts = train_test_split(X, y, test_size=.2) model = Sequential() model.add(Dense(16, input_dim=X.shape[1])) model.add(keras.layers.advanced_activations.PReLU()) model.add(Dense(16)) model.add(keras.layers.advanced_activations.PReLU()) model.add(Dense(16)) model.add(keras.layers.advanced_activations.PReLU()) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) batch_size = 1000 model.fit_generator(generator(X_tr, y_tr, batch_size), epochs=200, steps_per_epoch=X.shape[0]/ batch_size, validation_data=generator(X_ts, y_ts, batch_size * 2), validation_steps=X.shape[0] / batch_size * 2, verbose=2, use_multiprocessing=True) loss, accuracy = model.evaluate(X_ts, y_ts, verbose=0) print(loss, accuracy) </code></pre> <p>After running with <code>fit_generator</code>, the training time improved a little bit but it is still slow (each epoch now takes ~ 40-50s). When running <code>nvidia-smi</code> in the terminal, I found out that GPU utilization is ~15% only which makes me wonder if my code is wrong. I am posting my code above to kindly ask you if there is a bug causing to slow the performance of GPU.</p> <p>Thank you,</p>
<p>Just try assigning GPUs explicitly,</p> <pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # or, for more than 1 GPU, a comma-separated string such as "0,1"
</code></pre> <p>Hope this helps!</p>
python|tensorflow|keras
1
161
56,588,093
Python, converting int to str, trailing/leading decimal/zeros
<p>I convert my dataframe values to str, but when I concatenate them together the previous ints are including trailing decimals.</p> <pre><code>df["newcol"] = df['columna'].map(str) + '_' + df['columnb'].map(str) + '_' + df['columnc'].map(str) </code></pre> <p>This is giving me output like <code>500.0</code> how can I get rid of this leading/trailing decimal? <strong>sometimes my data in column a will have non alpha numeric character</strong>s.</p> <pre><code>+---------+---------+---------+------------------+----------------------+ | columna | columnb | columnc | expected | currently getting | +---------+---------+---------+------------------+----------------------+ | | -1 | 27 | _-1_27 | _-1.0_27.0 | | | -1 | 42 | _-1_42 | _-1.0_42.0 | | | -1 | 67 | _-1_67 | _-1.0_67.0 | | | -1 | 95 | _-1_95 | _-1.0_95.0 | | 91_CCMS | 14638 | 91 | 91_CCMS_14638_91 | 91_CCMS_14638.0_91.0 | | DIP96 | 1502 | 96 | DIP96_1502_96 | DIP96_1502.0_96.0 | | 106 | 11694 | 106 | 106_11694_106 | 00106_11694.0_106.0 | +---------+---------+---------+------------------+----------------------+ </code></pre> <p>Error:</p> <p><code>invalid literal for int() with base 10: ''</code></p>
<p><strong>Edit</strong>:<br> If your <code>df</code> has more than 3 columns, and you want to join only 3 columns, you may specify those columns in the command using columns slicing. Assume your <code>df</code> has 5 columns named as : <code>AA</code>, <code>BB</code>, <code>CC</code>, <code>DD</code>, <code>EE</code>. You want only joining columns <code>CC</code>, <code>DD</code>, <code>EE</code>. You just need to specify those 3 columns before the <code>fillna</code>, and assign the result to <code>newcol</code> as you want: </p> <pre><code>df["newcol"] = df[['CC', 'DD', 'EE']].fillna('') \ .applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1) </code></pre> <p><em>Note: I just break command into 2 lines using <code>'\'</code> for easy reading.</em> </p> <hr> <p><strong>Original</strong>:<br> I guess your real data of <code>columna</code> <code>columnb</code> <code>columnc</code> contain <code>str</code>, <code>float</code>, <code>int</code>, empty space, blank space, and maybe even <code>NaN</code>. </p> <p><code>Float</code> with decimal values = .00 in a column dtype <code>object</code> will show without decimal. </p> <p>Assume your <code>df</code> has only 3 columns: <code>colmna</code>, <code>columnb</code>, <code>columnc</code> as you said. Using command below will handle: <code>str</code>, <code>float</code>, <code>int</code>, <code>NaN</code> and joining 3 columns into one as you want:</p> <pre><code>df.fillna('').applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1) </code></pre> <hr> <p>I created a sample similar as yours</p> <pre><code> columna columnb columnc 0 -1 27 1 NaN -1 42 2 -1 67 3 -1 95 4 91_CCMS 14638 91 5 DIP96 96 6 106 11694 106 </code></pre> <p>Using your command returns the concatenated string having '.0' as you described</p> <pre><code>df['columna'].map(str) + '_' + df['columnb'].map(str) + '_' + df['columnc'].map(str) Out[1926]: 0 _-1.0_27.0 1 nan_-1.0_42.0 2 _-1.0_67.0 3 _-1.0_95.0 4 91_CCMS_14638_91 5 DIP96__96 6 106_11694_106 dtype: object </code></pre> <p>Using my command:</p> <pre><code>df.fillna('').applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1) Out[1927]: 0 _-1_27 1 _-1_42 2 _-1_67 3 _-1_95 4 91_CCMS_14638_91 5 DIP96__96 6 106_11694_106 dtype: object </code></pre>
python-3.x|pandas
1
162
25,997,532
swig with openmp and python, does swig -threads need extra GIL handling?
<p>I have my C library interfaced with swig. I can compile it with my setup.py. Here the extension section:</p> <pre><code>surf_int_lib = Extension("_surf_int_lib", ["surf_int_lib.i", "surf_int_lib.c"], include_dirs=[numpy_include], extra_compile_args=["-fopenmp"], extra_link_args=['-lgomp'], swig_opts=['-threads'] ) </code></pre> <p>In my library I use openmp for parallelization. When I call my routines, I get the correct number of threads but they all suffer from GIL and are run concurrently. My routines give me the correct output. I was under the impression that <code>swig -threads</code> would release GIL when entering the library. So why do my functions not parallelize? </p> <p>Here is an example of an openmp routine:</p> <pre><code>void gegenbauerval(double *x, int nx, double *cs, int ncs, double alpha, double *f, int nf) { int j; #pragma omp parallel for default(shared) private(j) for(j=0;j&lt;nx;++j){ f[j] = gegenbauerval_pt(x[j],cs,ncs, alpha); } } </code></pre> <p>My interface file does not include any <code>%threads</code> or <code>Py_BEGIN_ALLOW_THREADS</code> calls. Do I need to release GIL and if so, how would I do that?</p> <p><strong>Update:</strong> I have numpy with openblas installed in a virtualenv, which I use for my calculations. It is the exact same python interpreter as without virtualenv. If I run following onliner with activated environment, it is not parallelized. However, if I run it with the standard installation, it works. So I am no longer sure what the real error is.</p> <pre><code>python -c "import surf_int.lib.surf_int_lib as slib;import numpy as np;a=np.random.randn(1e8);c=np.random.rand(23);x=slib.gegenbauerval(a,c,1.5); print x" </code></pre>
<p>After further investigation, I found that this is an issue between openmp and openblas (at least version 0.2.8).</p> <p>After recompiling openblas 0.2.11 with option <code>USE_OPENMP=1</code>, both blas routines from numpy as well as my own extensions using openmp make use of all cpus, set by the environment variable <code>OMP_NUM_THREADS</code>.</p> <p>The issue is maybe related to <a href="https://github.com/xianyi/OpenBLAS/issues/294" rel="nofollow">this bug report</a> and or the changelog entry of <a href="https://github.com/xianyi/OpenBLAS/releases" rel="nofollow">openblas 0.2.9.rc2</a>.</p>
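<p>For anyone hitting the same symptom, a small sketch for checking which BLAS NumPy is linked against and pinning the thread counts before the libraries are loaded (the environment variables are the standard OpenMP/OpenBLAS ones; set them before the first import):</p> <pre><code>import os
os.environ["OMP_NUM_THREADS"] = "4"        # threads used by OpenMP regions (your swig extension)
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # threads used by OpenBLAS itself

import numpy as np
np.show_config()                           # shows which BLAS/LAPACK NumPy was built against
</code></pre>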
python|numpy|openmp|openblas
0
163
66,777,021
how to use the models under tensorflow/models/research/object_detection/models
<p>I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 model zoo. I noticed there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.</p> <p>To clarify, the readme says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.</p> <p>How is one supposed to use those models?</p> <p>I already have a custom-trained MobileNetV3 for image classification, and I was hoping to see a way to turn the network into an object detection network, in accordance with the MobileNetV3 paper. If this is not straightforward, training one network from scratch could be OK too; I just need to know where to even start from.</p>
<p>If you plan to use the Object Detection API, you can't use your existing model. You have to choose from a list of models <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">here</a> for v2 and <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md" rel="nofollow noreferrer">here</a> for v1.</p> <p>The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are very well explained <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a> by the TensorFlow team. The link is meant for TensorFlow v2. However, if you wish to use v1, the process is fairly similar and there are numerous blogs/videos explaining how to go about it.</p>
tensorflow|object-detection|object-detection-api|tensorflow-model-garden
0
164
67,129,554
ImportError with keras.preprocessing
<p>I am following a <a href="https://www.tensorflow.org/tutorials/images/classification#predict_on_new_data" rel="nofollow noreferrer">image classification tutorial at Tensorflow</a>. On running the following code-</p> <pre><code>import PIL import tensorflow as tf from tensorflow import keras sunflower_url = &quot;https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg&quot; sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url) img = keras.preprocessing.image.load_img(sunflower_path, target_size=(180, 180)) </code></pre> <p>I receive the following error at the last line.</p> <pre><code>ImportError: Could not import PIL.Image. The use of `load_img` requires PIL. </code></pre> <p>How can I fix the above issue?</p> <p>Kindly note that I have pillow installed on my conda working environment (python=3.8, Tensorflow=2.3).</p>
<p>The error says that you don't have <code>pillow</code> installed on your machine. If you're using conda, then you have to do</p> <pre><code>conda install pillow </code></pre> <p>If you're not using conda, then I would just try</p> <pre><code>pip install pillow </code></pre> <p><strong>Edit 1</strong>: In case you have PIL installed in a conda env already, then try</p> <pre><code>conda uninstall PIL conda install Pillow </code></pre> <p><strong>Edit 2</strong>: In case you may have an older version of Pillow installed that does not work with the version of TensorFlow/Keras installed in your env, reinstalling Pillow might help.</p>
python|tensorflow|keras|python-imaging-library
0
165
67,099,008
Matching nearest values in two dataframes of different lengths
<p>If I have two dataframes of different lengths, different labels and different levels of digit precision like so:</p> <pre><code>df1 = pd.DataFrame({'a':np.array([1.2345,2.2345,3.2345]),'b':np.array([4.123,5.123,6.123])}) df2 = pd.DataFrame({'A':np.array([1.2346,2.2343]),'B':np.array([4.1232,5.1239])}) </code></pre> <p>How can I find the rows where the two dataframes have approximately matching values between columns 'a' and 'A' (say within 2 digits of precision) that results in a dataframe like so</p> <pre><code> a b A B ------------------------------------------------ | 1.2345 | 4.123 | 1.2346 | 4.1232 | | 2.2345 | 5.123 | 2.2343 | 5.1239 | </code></pre> <p>Attempts:</p> <p>Attempt #1:</p> <pre><code>matches_df = pd.merge(df1, df2, how='inner', left_on=['a'], right_on = ['A']) </code></pre> <p>This only works if there are exact matches between columns 'a' and 'A' but I'm not sure how to incorporate a fudge factor to allow matching rows that are within 2 digits of precision.</p> <p>Attempt #2</p> <pre><code>matches_df = df1.loc[np.round(df1['a'],2)==np.round(df2['A'],2)] </code></pre> <p>This gives the error &quot;ValueError: Can only compare identically-labeled Series objects&quot; because <em>I think</em> the two dataframes have different labels ('a','b' and 'A','B').</p> <p>Any ideas on how this can be accomplished?</p>
<p>Using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html" rel="nofollow noreferrer">KDTree</a>, you can find the closest match in <code>df2</code> for every row of <code>df1</code> in <code>O(m log n)</code>, where <code>n</code> is the number of elements in <code>df2</code> and <code>m</code> is the number of elements in <code>df1</code>.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
from scipy.spatial import cKDTree

df1 = pd.DataFrame({'a':np.array([1.2345,2.2345,3.2345]),'b':np.array([4.123,5.123,6.123])})
df2 = pd.DataFrame({'A':np.array([1.2346,2.2343]),'B':np.array([4.1232,5.1239])})

def spatial_merge_NN(df1, df2, xyz=['A', 'B']):
    ''' Add features from df2 to df1, taking closest point '''
    tree = cKDTree(df2[xyz].values)
    dists, indices = tree.query(df1[['a','b']].values, k=1)
    fts = [c for c in df2.columns]
    for c in fts:
        df1[c] = df2[c].values[indices]
    return df1

df_new = spatial_merge_NN(df1, df2, ['A', 'B'])
#         a      b       A       B
# 0  1.2345  4.123  1.2346  4.1232
# 1  2.2345  5.123  2.2343  5.1239
# 2  3.2345  6.123  2.2343  5.1239
</code></pre> <p>It keeps one dataframe fixed (in this case <code>df1</code>), builds a tree from <code>df2</code>, and for each row of <code>df1</code> finds the closest pair in <code>df2</code> and adds that row's columns.</p>
python|pandas|dataframe|matching
1
166
66,929,837
Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1)
<p>I am trying to get the prediction of my model</p> <pre><code>prediction = model.predict(validation_names) print(prediction) </code></pre> <p>but I get the following error:</p> <pre><code>ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1) </code></pre> <p>I understand that this is due to the fact that the model accepts data of dimension 4</p> <p>Model:</p> <pre><code>model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(16, (3,3), activation = 'relu', input_shape = (300, 300, 3)), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(32, (3,3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64, (3,3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64, (3,3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation = 'relu'), tf.keras.layers.Dense(3, activation = 'softmax') ]) </code></pre> <p>How can I process the prediction data to solve this problem?</p>
<p>Conv2D expects <code>4+D tensor with shape: batch_shape + (channels, rows, cols) if data_format='channels_first' or 4+D tensor with shape: batch_shape + (rows, cols, channels) if data_format='channels_last'</code></p> <p><strong>Working sample code:</strong></p> <pre><code># The inputs are 28x28 RGB images with `channels_last` and the batch # size is 4. import tensorflow as tf input_shape = (4, 28, 28, 3) x = tf.random.normal(input_shape) y = tf.keras.layers.Conv2D(2, 3, activation='relu', input_shape=input_shape[1:])(x) </code></pre>
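<p>The model in the question expects batches of 300×300×3 images, not an array of names. A minimal sketch of preparing a single image for <code>model.predict</code> (the file path is hypothetical, and <code>model</code> is the Sequential model from the question):</p> <pre><code>import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img("some_image.jpg", target_size=(300, 300))  # hypothetical path
x = image.img_to_array(img)        # shape (300, 300, 3)
x = np.expand_dims(x, axis=0)      # shape (1, 300, 300, 3) -> the 4-D input Conv2D expects
prediction = model.predict(x)
</code></pre>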
python|tensorflow|keras
0
167
66,848,816
reshape dataframe in the required format
<p>I have the dataframe which looks like this:</p> <pre><code> b = {'STORE_ID': ['1234','5678','9876','3456','6789'], 'FULFILLMENT_TYPE': ['DELIVERY','DRIVE','DELIVERY','DRIVE','DELIVERY'], 'LAUNCH_DT':['2020-10-01','2020-10-02','2020-10-03','2020-10-04','2020-10-01']} df_1 = pd.DataFrame(data=b) </code></pre> <p>I would want to reshape it to include a daterange as a column in the dataframe</p> <pre><code>date_range = pd.date_range(start=final_forecasts['FORECAST_DATE'].iloc[0], end=final_forecasts['FORECAST_DATE'].iloc[-1]) </code></pre> <p>I would be getting this daterange from a different dataframe and would like to add it to the dataframe b such that it looks like this:</p> <pre><code>a = {'STORE_ID': ['1234','1234','1234','1234','1234','5678','5678','5678','5678','5678'], 'date_range': ['2020-08-01', '2020-08-02','2020-08-03','2020-08-04','2020-08-05','2020-08-01', '2020-08-02','2020-08-03','2020-08-04','2020-08-05'], 'FULFILLMENT_TYPE':['DELIVERY','DELIVERY','DELIVERY','DELIVERY','DELIVERY','DRIVE','DRIVE','DRIVE','DRIVE','DRIVE'], 'LAUNCH_DT':['2020-10-01','2020-10-01','2020-10-01','2020-10-01','2020-10-01','2020-10-02','2020-10-02','2020-10-02','2020-10-02','2020-10-02']} df = pd.DataFrame(data=a) df </code></pre> <p>I would need a separate row for each date.. how can i achieve this?</p>
<p>If you need to add the same <code>date_range</code> to each row of <code>df_1</code>, use a cross join via a helper key column in a new DataFrame:</p> <pre><code>df = (df_1.assign(a=1)
          .merge(pd.DataFrame({'date_range': date_range, 'a': 1}), on='a')
          .drop('a', axis=1))
</code></pre>
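<p>On pandas 1.2 or newer the same result can be written without the dummy key, using the built-in cross merge (a small sketch):</p> <pre><code>df = df_1.merge(pd.DataFrame({'date_range': date_range}), how='cross')
</code></pre>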
python-3.x|pandas
1
168
66,790,100
Variable array creation using numpy operations
<p>I wish to create a variable array of numbers in numpy while skipping a chunk of numbers. For instance, If I have the variables:</p> <pre><code>m = 5 k = 3 num = 50 </code></pre> <p>I want to create a linearly spaced numpy array starting at <code>num</code> and ending at <code>num - k</code>, skip <code>k</code> numbers and continue the array generation. Then repeat this process m times. For example, the above would yield:</p> <pre><code>np.array([50, 49, 48, 47, 44, 43, 42, 41, 38, 37, 36, 35, 32, 31, 30, 29, 26, 25, 24, 23]) </code></pre> <p>How can I accomplish this via Numpy?</p>
<p>You can try:</p> <pre><code>import numpy as np m = 5 k = 3 num = 50 np.hstack([np.arange(num - 2*i*k, num - (2*i+1)*k - 1, -1) for i in range(m)]) </code></pre> <p>It gives:</p> <pre><code>array([50, 49, 48, 47, 44, 43, 42, 41, 38, 37, 36, 35, 32, 31, 30, 29, 26, 25, 24, 23]) </code></pre> <p><strong>Edit:</strong></p> <p>@JanChristophTerasa posted an answer (now deleted) that avoided Python loops by masking some elements of an array obtained using <code>np.arange()</code>. Here is a solution inspired by that idea. It works much faster than the above one:</p> <pre><code>import numpy as np m = 5 k = 3 num = 50 x = np.arange(num, num - 2*k*m , -1).reshape(-1, 2*k) x[:, :k+1].ravel() </code></pre>
python|arrays|numpy
4
169
47,530,736
correct accessing of slices with duplicate index-values present
<p>I have a dataframe with an index that sometimes contains rows with the same index-value. Now I want to slice that dataframe and set values based on row-indices.</p> <p>Consider the following example:</p> <pre><code>import pandas as pd df = pd.DataFrame({'index':[1,2,2,3], 'values':[10,20,30,40]}) df.set_index(['index'], inplace=True) df1 = df.copy() df2 = df.copy() #copy warning df1.iloc[0:2]['values'] = 99 print(df1) df2.loc[df.index[0:2], 'values'] = 99 print(df2) </code></pre> <p>df1 is the expected result, but gives me a SettingWithCopyWarning. df2 seems to be the suggested way of accessing by the doc, but gives me the wrong result (because of the duplicate index)</p> <p>Is there a "proper" way to set those values correctly with the duplicate index-values present?</p>
<p><code>.loc</code> is not recommended when you have duplicate index. So you have to go for position based selection <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a>. Since we need to pass the positions, we have to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>get_loc</code></a> for getting position of column:</p> <pre><code>print (df2.columns.get_loc('values')) 0 df1.iloc[0:2, df2.columns.get_loc('values')] = 99 print(df1) values index 1 99 2 99 2 30 3 40 </code></pre>
pandas|indexing
2
170
47,189,031
Initialize empty vector with 3 dimensions
<p>I want to initialize an empty vector with 3 columns that I can add to. I need to perform some l2 norm distance calculations on the rows after I have added to it, and I'm having the following problem. </p> <p>I start with an initial empty array:</p> <pre><code>accepted_clusters = np.array([]) </code></pre> <p>Then I add my first 1x3 set of values to this:</p> <pre><code>accepted_clusters = np.append(accepted_clusters, X_1) </code></pre> <p>returning:</p> <pre><code>[ 0.47843416 0.50829221 0.51484499] </code></pre> <p>Then I add a second set of 1x3 values in the same way, and I get the following:</p> <pre><code>[ 0.47843416 0.50829221 0.51484499 0.89505277 0.8359252 0.21434642] </code></pre> <p>However, what I want is something like this:</p> <pre><code>[ 0.47843416 0.50829221 0.51484499] [ 0.89505277 0.8359252 0.21434642] .. and so on </code></pre> <p>This would enable me to calculate distances between the rows. Ideally, the initial empty vector would be of undefined length, but something like a 10x3 of zeros would also work if the code for that is easy. </p>
<p>The most straightforward way is to use <code>np.vstack</code>:</p> <pre><code>In [9]: arr = np.array([1,2,3]) In [10]: x = np.arange(20, 23) In [11]: arr = np.vstack([arr, x]) In [12]: arr Out[12]: array([[ 1, 2, 3], [20, 21, 22]]) </code></pre> <p>Note, your entire approach has major code smell, doing the above in a loop will give you quadratic complexity. Perhaps you should work with a list and then convert to an array at the end (which will at least be linear-time). Or maybe rethink your approach entirely. </p> <p>Or, as you imply, you could pre-allocate your array:</p> <pre><code>In [18]: result = np.zeros((10, 3)) In [19]: result Out[19]: array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]) In [20]: result[0] = x In [21]: result Out[21]: array([[ 20., 21., 22.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]) </code></pre>
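<p>For completeness, a minimal sketch of the "append to a list, convert once at the end" approach suggested above (the cluster values here are made up for illustration):</p> <pre><code>import numpy as np

rows = []                        # plain Python list: cheap appends
for _ in range(10):              # stand-in for your acceptance loop
    rows.append(np.random.rand(3))

accepted_clusters = np.array(rows)   # one conversion at the end, shape (10, 3)
</code></pre>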
python|arrays|numpy
3
171
47,317,617
Pandas iterrows get row string as list
<p>I have a df in pandas which looks like:</p> <pre><code>id name values 1 a cat dog 2 b bird fly </code></pre> <p>I'm currently doing: </p> <pre><code>for index, row in df.iterrows(): print row["values"] </code></pre> <p>However, that prints the entire cell: <code>"cat dog"</code> or <code>"bird fly"</code>.</p> <p>I've tried doing: </p> <pre><code>print row["values"][0] </code></pre> <p>That instead prints a single character, so <code>"c"</code> and <code>"b"</code>.</p> <p>How can I get instead something like <code>["cat", "dog"]</code> and <code>["bird", "fly"]</code></p>
<p>You need to split the data</p> <pre><code>df['values'].str.split() 0 [cat, dog] 1 [bird, fly] </code></pre> <p>To get the individual element, </p> <pre><code>df['values'].str.split().str[0] </code></pre> <p>And you get</p> <pre><code>0 cat 1 bird </code></pre>
python|pandas
2
172
68,306,769
Pandas: how to transpose part of a dataframe
<p>I have the following dataframe:</p> <pre><code> A B C param1 param2 param3 0 1 4 NaN val1 val4 val7 1 2 5 NaN val2 val5 val8 2 3 6 NaN val3 val6 val9 </code></pre> <p>Which I'd like to modify to get:</p> <pre><code> A B C Values 0 1 4 param1 val1 1 1 4 param2 val4 2 1 4 param3 val7 3 2 5 param1 val2 4 2 5 param2 val5 5 2 5 param3 val8 6 3 6 param1 val3 7 3 6 param2 val6 8 3 6 param3 val9 </code></pre> <p>How do I achieve this ?</p>
<pre><code>df.melt(id_vars = ['A','B'], value_vars = ['param1','param2', 'param3'])
</code></pre> <p>You can check the <code>melt</code> function: the <code>id_vars</code> columns are kept as identifiers, the <code>value_vars</code> columns are unpivoted into rows, and it also lets you change the labels of the resulting variable/value columns.</p>
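<p>To reproduce the column names from the desired output ("C" and "Values"), a variant of the same call using melt's <code>var_name</code> and <code>value_name</code> arguments (assuming the same df as above; sorting the rows is a separate step if you need the exact order shown):</p> <pre><code>df.melt(id_vars=['A', 'B'],
        value_vars=['param1', 'param2', 'param3'],
        var_name='C', value_name='Values')
</code></pre>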
python|pandas|dataframe
0
173
59,100,941
Python Pandas Library Resample By Truncate Date
<p>use python3 library <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a>, i have a data in excel file like this</p> <pre><code> Id | Date | count ----+-------------------------+----------- 1 | '2019/10/01 10:40' | 1 ----+------------------------------------- 2 | '2019/10/01 10:43' | 2 ----+------------------------------------- 3 | '2019/10/02 10:40' | 3 ----+------------------------------------- 4 | '2019/10/05 10:40' | 4 ----+------------------------------------- 5 | '2019/10/08 10:40' | 5 ----+------------------------------------- 6 | '2019/10/09 10:40' | 6 ----+------------------------------------- 7 | '2019/10/15 10:40' | 7 </code></pre> <p>i want group by this example by week and time. for example my needed result is:</p> <pre><code> Id | Week Time | count ----+-------------------------+----------- 1 | 'Tuesday 10:40' | 1 ----+------------------------------------- 2 | 'Tuesday 10:43' | 2 ----+------------------------------------- 3 | 'Wednesday 10:40' | 3 ----+------------------------------------- 4 | 'Saturday 10:40' | 4 ----+------------------------------------- 5 | 'Tuesday 10:40' | 5 ----+------------------------------------- 6 | 'Wednesday 10:40' | 6 ----+------------------------------------- 7 | 'Tuesday 10:40' | 7 </code></pre> <p>and after resample by pandas i get this result:</p> <pre><code> Week Time | sum | count | avg -------------------------+-------+-------+--------- 'Tuesday 10:40' | 14 | 3 | 4.66 -------------------------+-------+-------+--------- 'Tuesday 10:43' | 2 | 1 | 2.00 -------------------------+-------+-------+--------- 'Wednesday 10:40' | 9 | 2 | 4.50 ---------------------------------+-------+--------- 'Saturday 10:40' | 4 | 1 | 4.00 </code></pre> <p>can i get this result from resample method of pandas library ?</p>
<p>I believe you need custom format of datetimes by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>Series.dt.strftime</code></a> and then aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a>:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%A %H:%M') #if necessary remove trailing ' #df['Date'] = pd.to_datetime(df['Date'].str.strip("'")).dt.strftime('%A %H:%M') df = df.groupby('Date', sort=False)['count'].agg(['sum','count', 'mean']) print (df) sum count mean Date Tuesday 10:40 13 3 4.333333 Tuesday 10:43 2 1 2.000000 Wednesday 10:40 9 2 4.500000 Saturday 10:40 4 1 4.000000 </code></pre>
python|python-3.x|pandas|dataframe|resampling
2
174
59,311,945
Jupyter Notebook - Kernel dies during training - tensorflow-gpu 2.0, Python 3.6.8
<p>Since I am kind of new in this field I tried following the official tutorial from tensorflow for predicting time series. <a href="https://www.tensorflow.org/tutorials/structured_data/time_series" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/structured_data/time_series</a></p> <p>Following problem occurs: -When training a multivariate model, after 2 or 3 epochs the kernel dies and restarts.</p> <p>However this doesn't happen with a simpler univariate model, which has only one LSTM layer (not really sure if this makes a difference).</p> <p>Second however, this problem just happened today. Yesterday the training of the multivariate model was possible and error-free.</p> <p>As can be seen in the tutorial in the link below the model looks like this:</p> <pre><code>multi_step_model = tf.keras.models.Sequential() multi_step_model.add(tf.keras.layers.LSTM(32,return_sequences=True,input_shape=x_train_multi.shape[-2:])) multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu')) multi_step_model.add(tf.keras.layers.Dense(72)) multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae') </code></pre> <p>And the kernel dies after executing the following cell (usually after 2 or 3 epochs).</p> <pre><code>multi_step_history = multi_step_model.fit(train_data_multi, epochs=10, steps_per_epoch=300, validation_data=val_data_multi, validation_steps=50) </code></pre> <p>I have uninstalled and reinstalled tf, restarted my laptop, but nothing seems to work.</p> <p>Any ideas?</p> <p>OS: Windows 10 Surface Book 1</p>
<p>The problem was a batch size that was too large: reducing it from 1024 to 256 stopped the kernel from crashing.</p> <p>Solution taken from the comment of rbwendt on <a href="https://github.com/tensorflow/tensorflow/issues/9829" rel="nofollow noreferrer">this thread on github</a>.</p>
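<p>For reference, in the linked tutorial the batch size is set where the <code>tf.data</code> pipeline is built, so the change looks roughly like the sketch below (the variable names follow the tutorial and are placeholders for your own data):</p> <pre><code>BATCH_SIZE = 256   # was 1024; smaller batches need less GPU memory

train_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi))
train_data_multi = train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()

val_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi))
val_data_multi = val_data_multi.batch(BATCH_SIZE).repeat()
</code></pre>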
python-3.x|tensorflow|jupyter-notebook
0
175
59,155,277
Looping through list with dataframe elements in python
<p>I want to iterate over a list, which has dataframes as its elements. </p> <p>Example: ls is my list with below elements (two dataframes)</p> <pre><code> seq score status 4366 CGAGGCTGCCTGTTTTCTAGTTG 5.15 negative 5837 GGACCTTTTTTACAATATAGCCA 3.48 negative 96 TTTCTAGCCTACCAAAATCGGAG -5.27 negative 1369 CTTCCTATCTTCATTCTTCGACT 1.28 negative 1223 CAAGTTTGT 2.06 negative 5451 TGTTTCCACACCTGTCTCAGCTC 4.48 negative 1277 GTACTGTGGAATCTCGGCAGGCT 4.87 negative 5299 CATAATGAATGCCCCATCAATTG -7.19 negative 3477 ATGGCACTG -3.60 negative 2953 AGTAATTCTGTTGCCTGAAGATA 2.86 negative 4586 TGGGCAAGT 2.48 negative 3746 AATGAGAGG -3.67 negative, seq score status 1983 AGCAGATCAAACGGGTAAAGGAC -4.81 negative 3822 CCCTGGCCCACGCACTGCAGTCA 3.32 negative 1127 GCAGAGATGCTGATCTTCACGTC -6.77 negative 3624 TGAGTATGG 0.60 negative 4559 AAGGTTGGG 4.94 negative 4391 ATGAAGATCATCGAAATCAGTTT -2.09 negative 4028 TCTCCGACAATGCCTATCAGTAC 1.14 negative 2694 CAGGGAACT 0.98 negative 2197 CTTCCATTGAGCTGCTCCAGCAC -0.97 negative 2025 TGTGATCTGGCTGCACGCACTGT -2.13 negative 5575 CCAGAAAGG -2.45 negative 275 TCTGTTGGGTTTTCATACAGCTA 7.11 negative </code></pre> <p>When I am accessing its elements, I am getting following error. <strong>list indices must be integers, not DataFrame</strong></p> <p>I tried the following code:</p> <pre><code>cut_off = [1,2,3,4] for i in ls: for co in cut_off: print "Negative set : " + "cut off value =", str( co), number of variants = ", str((ls[i]['score'] &gt; co).sum()) </code></pre> <p>I want to access each dataframe element in the list and compare the score value of each row. If it is more than the cut_off value, it should sum it and give me the total number of rows which value > cut_off value.</p> <p><strong>Expected output:</strong> Negative set : cut off value = 0 , number of variants = 8</p> <p>Thanks</p>
<p>This should work ok (note that your print statement was also missing the opening quote before "number of variants"):</p> <pre><code>cut_off = [1, 2, 3, 4]
for df in ls:
    for co in cut_off:
        print "Negative set : " + "cut off value = " + str(co) + ", number of variants = " + str((df['score'] &gt; co).sum())
</code></pre>
python|pandas
1
176
59,343,025
Adding few columns to data frame calculating a median corresponding with other 3 columns
<p>I have the following dataframe:</p> <pre><code> Name Number Date Time Temperature RH Height AH 0 Rome 301 01/10/2019 02:00 20.5 89 10 15.830405 1 Rome 301 01/10/2019 05:00 19.4 91 10 15.176020 .. ... ... ... ... ... .. ... ... 91 Napoli 600 02/10/2019 11:00 30.5 52 5 16.213860 92 Napoli 600 02/10/2019 14:00 30.3 51 5 15.731054 </code></pre> <p>Under "Name" there are a few locations, under AH is the Absolute Humidity. I want to calculate the median AH per each location for each Date (There are 2 days) and to display each of these daily medians in new columns called <code>med_AH_[Date]</code>. (In total 2 new columns).</p> <p>How do I do this?</p> <p>This is what I have until now:</p> <pre><code>my_data['med_AH_[Date]']= my_data.groupby('Name')['AH'].transform('median') </code></pre> <p>But it naturally provides me only the medians by Name and with no division between dates.</p>
<p>I believe you just need to update your groupby to include <code>Date</code>:</p> <pre><code>my_data['med_AH_[Date]']= my_data.groupby(['Name', 'Date'])['AH'].transform('median') </code></pre>
python|pandas|dataframe|transform|pandas-groupby
0
177
59,400,154
passing value from panda dataframe to http request
<p>I'm not sure how I should ask this question. I'm looping through a csv file using panda (at least I think so). As I'm looping through rows, I want to pass a value from a specific column to run an http request for each row. </p> <p>Here is my code so far:</p> <pre><code>def api_request(request): fs = gcsfs.GCSFileSystem(project=PROJECT) with fs.open('gs://project.appspot.com/file.csv') as f: df = pd.read_csv(f,) value = df[['ID']].to_string(index=False) print(value) response = requests.get(REQUEST_URL + value,headers={'accept': 'application/json','ClientToken':TOKEN } ) json_response = response.json() print(json_response) </code></pre> <p>As you can see, I'm looping through the csv file to get the ID to pass it to my request url. </p> <p>I'm not sure I understand the issue but looking at the console log it seems that <code>print(value)</code> is in the loop when the response request is not. In other words, in the console log I'm seeing all the ID printed but I'm seeing only one http request which is empty (probably because the ID is not correctly passed to it). </p> <p>I'm running my script with cloud functions. </p>
<p>Actually, forgo the use of the Pandas library and simply iterate through csv</p> <pre><code>import csv def api_request(request): fs = gcsfs.GCSFileSystem(project=PROJECT) with fs.open('gs://project.appspot.com/file.csv') as f: reader = csv.reader(f) next(reader, None) # SKIP HEADERS for row in reader: # LOOP THROUGH GENERATOR (NOT PANDAS SERIES) value = row[0] # SELECT FIRST COLUMN (ASSUMED ID) response = requests.get( REQUEST_URL + value, headers={'accept': 'application/json', 'ClientToken': TOKEN } ) json_response = response.json() print(json_response) </code></pre>
python|python-3.x|pandas|google-cloud-functions
2
178
14,160,806
histogram matching in Python
<p>I am trying to do histogram matching of simulated data to observed precipitation data. The below shows a simple simulated case. I got the CDF of both the simulated and observed data and got stuck theree. I hope a clue would help me to get across..Thanks you in advance</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import interp1d import scipy.stats as st sim = st.gamma(1,loc=0,scale=0.8) # Simulated obs = st.gamma(2,loc=0,scale=0.7) # Observed x = np.linspace(0,4,1000) simpdf = sim.pdf(x) obspdf = obs.pdf(x) plt.plot(x,simpdf,label='Simulated') plt.plot(x,obspdf,'r--',label='Observed') plt.title('PDF of Observed and Simulated Precipitation') plt.legend(loc='best') plt.show() plt.figure(1) simcdf = sim.cdf(x) obscdf = obs.cdf(x) plt.plot(x,simcdf,label='Simulated') plt.plot(x,obscdf,'r--',label='Observed') plt.title('CDF of Observed and Simulated Precipitation') plt.legend(loc='best') plt.show() # Inverse CDF invcdf = interp1d(obscdf,x) transfer_func = invcdf(simcdf) plt.figure(2) plt.plot(transfer_func,x,'g-') plt.show() </code></pre>
<p>I tried to reproduce your code, and got the following error:</p> <pre><code>ValueError: A value in x_new is above the interpolation range. </code></pre> <p>If you look at the plot of your two CDFs it is pretty straight forward to figure out what is going on:</p> <p><img src="https://i.stack.imgur.com/n6KlP.png" alt="enter image description here"></p> <p>When you now define <code>invcdf = interp1d(obscdf, x)</code>, notice that <code>obscdf</code> ranges from</p> <pre><code>&gt;&gt;&gt; obscdf[0] 0.0 &gt;&gt;&gt; obscdf[-1] 0.977852889924409 </code></pre> <p>and so <code>invcdf</code> can only interpolate values between those limits: beyond them we would have to do extrapolation, which is not all that well defined. SciPy's default behavior is to raise an error when asked to extrapolate. Which is exactly what happens when you ask for <code>invcdf(simcdf)</code>, because</p> <pre><code>&gt;&gt;&gt; simcdf[-1] 0.99326205300091452 </code></pre> <p>is beyond the interpolation range.</p> <p>If you read <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow noreferrer">the <code>interp1d</code> docs</a> you will see that this behavior can be modified doing</p> <pre><code>invcdf = interp1d(obscdf, x, bounds_error=False) </code></pre> <p>and now everything works out fine, although you need to reverse the order of your plotting arguments to <code>plt.plot(x, transfer_func,'g-')</code> to get the same as in the figure you posted:</p> <p><img src="https://i.stack.imgur.com/tRmas.png" alt="enter image description here"></p>
python|numpy|histogram|cdf
4
179
13,958,129
How to apply function to date indexed DataFrame
<p>I am having lots of issues working with DataFrames with date indexes.</p> <pre><code>from pandas import DataFrame, date_range # Create a dataframe with dates as your index data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] idx = date_range('1/1/2012', periods=10, freq='MS') df = DataFrame(data, index=idx, columns=['Revenue']) df['State'] = ['NY', 'NY', 'NY', 'NY', 'FL', 'FL', 'GA', 'GA', 'FL', 'FL'] In [6]: df Out[6]: Revenue State 2012-01-01 1 NY 2012-02-01 2 NY 2012-03-01 3 NY 2012-04-01 4 NY 2012-05-01 5 FL 2012-06-01 6 FL 2012-07-01 7 GA 2012-08-01 8 GA 2012-09-01 9 FL 2012-10-01 10 FL </code></pre> <p>I am trying to add an additional column named <code>'Mean'</code> with the group averages:</p> <h3>I tried this, but it does not work:</h3> <pre><code>df2 = df df2['Mean'] = df.groupby(['State'])['Revenue'].apply(lambda x: mean(x)) In [9]: df2.head(10) Out[9]: Revenue State Mean 2012-01-01 1 NY NaN 2012-02-01 2 NY NaN 2012-03-01 3 NY NaN 2012-04-01 4 NY NaN 2012-05-01 5 FL NaN 2012-06-01 6 FL NaN 2012-07-01 7 GA NaN 2012-08-01 8 GA NaN 2012-09-01 9 FL NaN 2012-10-01 10 FL NaN </code></pre> <h3>But I am trying to get:</h3> <pre><code> Revenue State Mean 2012-01-01 1 NY 2.5 2012-02-01 2 NY 2.5 2012-03-01 3 NY 2.5 2012-04-01 4 NY 2.5 2012-05-01 5 FL 7.5 2012-06-01 6 FL 7.5 2012-07-01 7 GA 7.5 2012-08-01 8 GA 7.5 2012-09-01 9 FL 7.5 2012-10-01 10 FL 7.5 </code></pre> <p>How can I get this DataFrame?</p>
<p>You nearly had it! First create the groupby object:</p> <pre><code>means = df.groupby('State').mean() In [5]: means Out[5]: Revenue State FL 7.5 GA 7.5 NY 2.5 </code></pre> <p>Then <code>apply</code> this to each state in the DataFrame:</p> <pre><code>df['mean'] = df['State'].apply(lambda x: means.ix[x]['Revenue']) In [7]: df Out[7]: Revenue State mean 2012-01-01 1 NY 2.5 2012-02-01 2 NY 2.5 2012-03-01 3 NY 2.5 2012-04-01 4 NY 2.5 2012-05-01 5 FL 7.5 2012-06-01 6 FL 7.5 2012-07-01 7 GA 7.5 2012-08-01 8 GA 7.5 2012-09-01 9 FL 7.5 2012-10-01 10 FL 7.5 </code></pre>
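<p>On newer pandas versions, where <code>.ix</code> has been removed, the same result can be obtained in one step with <code>groupby</code>/<code>transform</code> (a sketch using the df from the question):</p> <pre><code>df['mean'] = df.groupby('State')['Revenue'].transform('mean')
</code></pre>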
indexing|group-by|pandas
6
180
45,026,934
How to perform convolutions individually per feature map
<p>I have data in the format NHWC: <code>100 x 64 x 64 x 3</code>. I want to apply the laplacian filter to each channel separately. I want the output as <code>100 x 64 x 64 x 3</code>. </p> <pre><code>k = tf.reshape(tf.constant([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], tf.float32), [3, 3, 1, 1]) </code></pre> <p>I tried this, but this throws an error of dimensions. It expects 3 channels as input. output = tf.abs(tf.nn.conv2d(input, k, strides=[1, 1, 1, 1], padding='SAME'))</p> <p>I modified <code>k = tf.reshape(tf.constant([[0, -1, 0], [-1, 4, -1], [0, 1, 0]]*3, tf.float32), [3, 3, 3, 1])</code>, but this just outputs 1 feature map <code>100 x 64 x 64 x 1</code>. `</p> <p>I tried using <code>tf.nn.depthwise_conv2d</code> but its throwing the same error. How do I actually implement it?</p> <pre><code>output = tf.abs(tf.nn.depthwise_conv2d(input, k, strides=[1, 1, 1, 1], padding='SAME')) </code></pre>
<p>This is what <code>tf.nn.depthwise_conv2d</code> does. However, it is more general than that and actually lets you choose one or more convolution kernels <em>per channel</em>.</p> <p>If you want to have the same kernel for all channels, you need to duplicate the kernel to match the number of channels. E.g.</p> <pre><code># my 2D conv kernel (the Laplacian from the question)
k = tf.constant([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], tf.float32)
# duplicate my kernel channel_in times
k = tf.tile(k[..., tf.newaxis], [1, 1, channel_in])[..., tf.newaxis]
# apply conv
tf.nn.depthwise_conv2d(input, k, strides=[1, 1, 1, 1], padding='SAME')
</code></pre>
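<p>Here <code>channel_in</code> is simply the number of channels of the input; for the NHWC data in the question it can be read off the input tensor, e.g.:</p> <pre><code>channel_in = int(input.shape[-1])   # 3 for a (100, 64, 64, 3) input
output = tf.abs(tf.nn.depthwise_conv2d(input, k, strides=[1, 1, 1, 1], padding='SAME'))
# output keeps shape (100, 64, 64, 3): one filtered map per input channel
</code></pre>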
python|tensorflow
2
181
57,202,717
How to iterate over first element of each nested tensor in tensorflow, python?
<p>I am working with a tensor which looks as follows:</p> <pre><code>X = tf.constant([['a', 'y', 'b'], ['b', 'y', 'a'], ['a', 'y', 'c'], ['c', 'y', 'a'], ['a', 'y', 'd'], ['c', 'y', 'd'], ['b', 'y', 'c'], ['f', 'y', 'e']]) </code></pre> <p>I wish to iterate over this in a manner that I am able to retrieve the first element of each nested tensor, i.e., 'a', 'b', 'a', 'c',... and perform some operation in that iteration.</p> <p>I have tried using tf.slice() operation but I am new to tensorflow and am unable to figure out how to go about it. Any help will be appreciated. Thanks!</p>
<p>You probably have not evaluated the tensor yet. Use <code>tensor.eval()</code> or <code>session.run(tensor)</code> to evaluate the result:</p> <pre><code>import tensorflow as tf X = tf.constant([['a', 'y', 'b'], ['b', 'y', 'a'], ['a', 'y', 'c'], ['c', 'y', 'a'], ['a', 'y', 'd'], ['c', 'y', 'd'], ['b', 'y', 'c'], ['f', 'y', 'e']]) with tf.Session() as sess: for i in X[:,0].eval(): element= i.decode("utf-8") print(element) # Or using sess.run() #for j in sess.run(X[:,0]): # element= j.decode("utf-8") # print(element) </code></pre> <p>Output:</p> <pre><code>a b a c a c b f </code></pre>
python|tensorflow
2
182
57,086,868
How to append 2 numpy Image Arrays with different dimensions and shapes using numpy
<p>I am making an input dataset which will have couple of thousands of images which all don't have same sizes but have same number of channels. I need to make these different images into one stack.</p> <pre><code>orders = (channels, size, size) Image sizes = (3,240,270), (3,100,170), etc </code></pre> <p>I have tried appending it to axis of 0 and one and inserting too.</p> <pre><code>Images = append(Images, image, axis = 0) </code></pre> <pre><code> File &quot;d:/Python/advanced3DFacePointDetection/train.py&quot;, line 25, in &lt;module&gt; Images = np.append(Images, item, axis=0) File &quot;C:\Users\NIK\AppData\Roaming\Python\Python37\site-packages\numpy\lib\function_base.py&quot;, line 4694, in append return concatenate((arr, values), axis=axis) ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>Ideal output shape is like (number of images, 3) 3 for number of channels and it contains different shapes of images after that.</p>
<p>If you don't want to resize the images, choose the biggest one and pad all the other pictures to the same shape. I explained how to pad in this question: <a href="https://stackoverflow.com/questions/56420792/can-we-resize-an-image-from-64x64-to-256x256-without-increasing-the-size/56421174#56421174">Can we resize an image from 64x64 to 256x256 without increasing the size</a>.</p> <p>When you run that script in a loop over all your images, create a list that saves each original shape. When you want to get an original image back, take the image at index x in your array and the shape at index x in your list, then crop the padded image back to the original size.</p>
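<p>A minimal sketch of that idea with plain NumPy (the target size and image shape below are just the examples from the question; zero padding is added at the bottom/right so cropping back is a simple slice):</p> <pre><code>import numpy as np

target_h, target_w = 240, 270            # size of the biggest image
img = np.zeros((3, 100, 170))            # one (channels, height, width) image

pad_h = target_h - img.shape[1]
pad_w = target_w - img.shape[2]
padded = np.pad(img, ((0, 0), (0, pad_h), (0, pad_w)), mode='constant')

original_shape = img.shape               # keep this in your list of shapes
restored = padded[:, :original_shape[1], :original_shape[2]]   # crop back later
</code></pre>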
python|arrays|numpy
0
183
45,844,805
C/C++ speed ODE integration from Python
<p>I am numerically integrating some ODE's, e.g.</p> <pre><code>y'(t) = f(y(t), t) </code></pre> <p>This is easily done using for instance scipy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html" rel="nofollow noreferrer">integrate.ode</a>. The function <code>f</code> is defined using standard Python, e.g.:</p> <pre><code>def f(y, t, k): return -k*y**3 </code></pre> <p>My understand is that this means that the fortran/C implementation used by integrate.ode must do call-backs to Python all the time and this can be quite slow. <b>My question is whether there is a way to avoid this?</b></p> <p>Preferably I am looking for a package that lets me inline in my Python code a C-snippet, e.g.:</p> <pre><code>double f(double y, double t, double k) { return -k*pow(y,3); } </code></pre> <p>Is there any ODE integrator library for Python that allows this?</p> <p>I know there are packages like <a href="https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/weave.html" rel="nofollow noreferrer">scipy.weave</a> that could be used to inline C code in Python, but I can't see an easy way to interface with integrate.ode. In all cases I think interfacing will have to go through a Python function call.</p> <p>Inline C-code like this exist in other libraries such as <a href="https://fenicsproject.org/olddocs/dolfin/1.5.0/python/programmers-reference/functions/expression/Expression.html" rel="nofollow noreferrer">fenics</a>, where <code>Expression</code> allow jit compiled C code.</p>
<p>There is a scikit devoted to this that extends the capabilities of <code>scipy.integrate</code>.</p> <p>It is available here: <a href="https://github.com/bmcage/odes" rel="nofollow noreferrer">https://github.com/bmcage/odes</a></p> <p>The documentation contains an example of ODE integration sped up by implementing the right hand side in Cython: <a href="https://github.com/bmcage/odes/blob/master/docs/ipython/Cython%20cvode%20speedup.ipynb" rel="nofollow noreferrer">https://github.com/bmcage/odes/blob/master/docs/ipython/Cython%20cvode%20speedup.ipynb</a></p> <p>As DavidW mentions, there is a new feature in SciPy to implement compiled style callbacks but only the quadrature routines can make use of it at this time.</p>
python|numpy|scipy|cython
2
184
23,266,343
numpy: copying value defaults on integer indexing vs boolean indexing
<p>I have recently started studying McKinney's Python for data analysis. This tripped me up in the book:</p> <blockquote> <p>Array slices are views on the original array. This means data is not copied and any modifications to the view will be reflected in the source array ... As NumPy has been designed with large data use case in mind, you could imagine performance and memory problems if NumPy insisted on copying data left to right.</p> </blockquote> <p>Fine. Seems like a sensible design choice. But two pages later it says:</p> <blockquote> <p>Selecting data from an array by boolean indexing always creates a copy of the data, even if the returned array is unchanged.</p> </blockquote> <p>Wait, what? Also,</p> <blockquote> <p>You can even mix and match boolean arrays with slices ... e.g. <code>data[names == 'Bob', 2:]</code></p> </blockquote> <p>Now what would that return? A view on a copy of the data? And why is this behavior the way it is? Coming from R, I see boolean indexing and location based indexing equally frequently used techniques. If NumPy has been designed to avoid copying memory, what drives this design choice?</p> <p>Thanks.</p>
<p>Let's assume a 1D array. The data in memory would look something like:</p> <pre><code>10 | 11 | 12 | 13 | 14 | 15 | 16 </code></pre> <p>Accessing an element by index is trivial. Just take the position of the first element, and jump <code>n</code> steps. So, for <code>arr[2]</code>:</p> <pre><code>10 | 11 | 12 | 13 | 14 | 15 | 16 ^ </code></pre> <p>I can get the position in memory with just one multiplication. Fast and easy.</p> <p>I can do a slice, and say "take only <code>arr2 = arr[2:-1]</code>":</p> <pre><code>10 | 11 | 12 | 13 | 14 | 15 | 16 ^----^----^----^ </code></pre> <p>Now, the memory layout is very similar. Getting an element is a multiplication from a new starting point. <code>arr2[1]</code>:</p> <pre><code>10 | 11 | 12 | 13 | 14 | 15 | 16 (ignore) -----^---------- </code></pre> <p>You can do a fancier trick, and say <code>arr3 = arr[::2]</code>, take all the elements jumping one each.</p> <pre><code>10 | 11 | 12 | 13 | 14 | 15 | 16 ^---------^---------^---------^ </code></pre> <p>Again, getting indexes of <code>arr3</code> is very simple: just do a multiplication, but now the size is bigger. This is what strides are for, they tell you the sizes of the blocks and how to get elements by indexing. Strides are even more powerful in more dimensions. This is, by the way, the way we can turn memory (1D) into a matrix (2D).</p> <p>Now, we get to boolean arrays. If my mask is: <code>T F T T F F T</code> and I ask you for the third element, you would need to transvers the mask, find which is the third true, and then get its index; thus, very slow. So, when taking a boolean mask we have to make a copy of the data. There are some masks than can be represented with strides, but not in general, so for consistency, always a copy.</p> <p>As a side note, sometimes, the cost of making a copy is worth performance-wise. If you want to do many operations reading "every fifth element of an array", the data in memory will not be aligned, so the CPU will have to wait for it to be fetched every time. It would then be faster to make a single copy (will be continuous), and work with it.</p>
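<p>On a reasonably recent NumPy you can see this directly by checking whether the result shares memory with the original array:</p> <pre><code>import numpy as np

a = np.arange(10, 17)
s = a[2:5]                  # slice: a view on the same memory
b = a[a % 2 == 0]           # boolean mask: a copy

print(np.shares_memory(a, s))   # True
print(np.shares_memory(a, b))   # False
</code></pre>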
python|arrays|numpy
8
185
35,728,838
Pandas: Get an if statement/.loc to return the index for that row
<p>I've got a dataframe with 2 columns and I'm adding a 3rd. </p> <p>I want the 3rd column to be dependant on the value of the 2nd either returning a set answer or the corresponding index for that row. </p> <p>An example the database is below:</p> <pre><code>print (df) Amount Percentage Country Belgium 20 .0952 France 50 .2380 Germany 60 .2857 UK 80 .3809 </code></pre> <p>Now I want my new third column to say 'Other' if the percentage is below 25% and to say the name of the country if the percentage is above 25%. So this is what I've written:</p> <pre><code>df.['Country']='Other') df.loc[df['percentage']&gt;0.25, 'Country']=df.index </code></pre> <p>Unfortunately my output doesn't give the equivalent index; it just gives the index in order:</p> <pre><code> print (df) Amount Percentage Country Country Belgium 20 .0952 Other France 50 .2380 Other Germany 60 .2857 Belgium UK 80 .3809 France </code></pre> <p>Obviously I want to see Germany across from Germany and UK across from UK. How can I get it to give me the index which is in the same row as the number which trips the threshold in my code?</p>
<p>You can try <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p> <pre><code>df['Country'] = np.where(df['Percentage']&gt;0.25, df.index, 'Other') print df Amount Percentage Country Country Belgium 20 0.0952 Other France 50 0.2380 Other Germany 60 0.2857 Germany UK 80 0.3809 UK </code></pre> <p>Or create <code>Series</code> from <code>index</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a>:</p> <pre><code>df['Country']='Other' df.loc[df['Percentage']&gt;0.25, 'Country']=df.index.to_series() print df Amount Percentage Country Country Belgium 20 0.0952 Other France 50 0.2380 Other Germany 60 0.2857 Germany UK 80 0.3809 UK </code></pre>
python|if-statement|pandas|dataframe
2
186
35,756,601
Debug for reading JSON file with Python Pandas
<p>I got stuck when I was trying to simply read JSON file with <code>Pandas.read_json</code>. When I try with this sample dataset, it's great. </p> <pre><code>import pandas as pd df = pd.read_json('sample.json') </code></pre> <p>My sample JSON file looks like below:</p> <pre><code>[{"field1": "King's Landing", "field2": 4, "field3": "2014-01-25", "field4": 4.7, "field5": 1.1, "field6": "2014-06-17", "field7": "iPhone", "field8": 15.4, "field9": true, "field10": 46.2, "field11": 3.67, "field12": 5.0}, {"field1": "Astapor", "field2": 0, "field3": "2014-01-29", "field4": 5.0, "field5": 1.0, "field6": "2014-05-05", "field7": "Android", "field8": 0.0, "field9": false, "field10": 50.0, "field11": 8.26, "field12": 5.0}, {"field1": "Astapor", "field2": 3, "field3": "2014-01-06", "field4": 4.3, "field5": 1.0, "field6": "2014-01-07", "field7": "iPhone", "field8": 0.0, "field9": false, "field10": 100.0, "field11": 0.77, "field12": 5.0}] </code></pre> <p>Unfortunately, when I just tried simply replace with file name with my full dataset, it returns the following error: </p> <pre><code>Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-63-02c20a7d81eb&gt;", line 1, in &lt;module&gt; df1 = pd.read_json('train.json') File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/json.py", line 210, in read_json date_unit).parse() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/json.py", line 278, in parse self._parse_no_numpy() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/json.py", line 495, in _parse_no_numpy loads(json, precise_float=self.precise_float), dtype=None) ValueError: Expected object or value </code></pre> <p>Anyone could help me debug why it says this?</p>
<p>I guess you have misspelled your JSON filename...</p> <p>the following script gives me exactly the same error message:</p> <pre><code>import pandas as pd df = pd.read_json('THERE_IS_NO_SUCH_FILE.json') </code></pre> <p>You may also want to validate your JSON file <a href="https://jsonformatter.curiousconcept.com/" rel="nofollow">here</a></p> <p>If your JSON file is too big to be parsed online try the following:</p> <pre><code>python -m json.tool your_json_file.json </code></pre> <p>It should show you the place where the first parsing/validation error occurs</p>
python|json|pandas
2
187
11,883,072
Python and numpy - converting multiple values in an array to binary
<p>I have a numpy array that is rather large, about 1mill. The distinct number of numbers is about 8 numbered 1-8.</p> <p>Lets say I want given the number 2, I would like to recode all 2's to 1 and the rest to 0's.</p> <pre><code>i.e. 2==&gt;1 1345678==0 Is there a pythonic way to do this with numpy? [1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8]=&gt; [0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0] </code></pre> <p>Thanks</p>
<p>That's the result of <code>a == 2</code> for a NumPy array <code>a</code>:</p> <pre><code>&gt;&gt;&gt; a = numpy.random.randint(1, 9, size=20) &gt;&gt;&gt; a array([4, 5, 1, 2, 5, 7, 2, 5, 8, 2, 4, 6, 6, 1, 8, 7, 1, 7, 8, 7]) &gt;&gt;&gt; a == 2 array([False, False, False, True, False, False, True, False, False, True, False, False, False, False, False, False, False, False, False, False], dtype=bool) &gt;&gt;&gt; (a == 2).astype(int) array([0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) </code></pre> <p>If you want to change <code>a</code> in place, the most efficient way to do so is to use <code>numpy.equal()</code>:</p> <pre><code>&gt;&gt;&gt; numpy.equal(a, 2, out=a) array([0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) </code></pre>
python|numpy
5
188
28,767,642
How to compare two lists in python
<p>Suppose I have two lists (or <code>numpy.array</code>s):</p> <pre><code>a = [1,2,3] b = [4,5,6] </code></pre> <p>How can I check if each element of <code>a</code> is smaller than corresponding element of <code>b</code> at the same index? (I am assuming indices are starting from 0) i.e. </p> <pre><code>at index 0 value of a = 1 &lt; value of b = 4 at index 1 value of a = 2 &lt; value of b = 5 at index 2 value of a = 3 &lt; value of b = 6 </code></pre> <p>If <code>a</code> were equal to <code>[1,2,7]</code>, then that would be incorrect because at index 2 value of <code>a</code> is greater than that of <code>b</code>. Also if <code>a</code>'s length were any smaller than that of <code>b</code>, it should be comparing only the indices of <code>a</code> with those of <code>b</code>.</p> <p>For example this pair <code>a</code>, <code>b</code></p> <pre><code>a = [1,2] b = [3,4,5] </code></pre> <p>at indices 0 and 1, value of <code>a</code> is smaller than <code>b</code>, thus this would also pass the check.</p> <p>P.S.--> I have to use the above conditions inside a <code>if</code> statement. And also, no element of <code>a</code> should be equal to that of <code>b</code> i.e. strictly lesser. Feel free to use as many as tools as you like. (Although I am using lists here, you can convert the above lists into numpy arrays too.)</p>
<p>Answering both parts with <code>zip</code> and <code>all</code></p> <pre><code>all(i &lt; j for (i, j) in zip(a, b)) </code></pre> <p><code>zip</code> will pair the values from the beginning of <code>a</code> with values from beginning of <code>b</code>; the iteration ends when the shorter iterable has run out. <code>all</code> returns <code>True</code> if and only if all items in a given are true in boolean context. Also, when any item fails, <code>False</code> will be returned early.</p> <p>Example results:</p> <pre><code>&gt;&gt;&gt; a = [1,2,3] &gt;&gt;&gt; b = [4,5,6] &gt;&gt;&gt; all(i &lt; j for (i, j) in zip(a, b)) True &gt;&gt;&gt; a = [1,2,7] &gt;&gt;&gt; b = [4,5,6] &gt;&gt;&gt; all(i &lt; j for (i, j) in zip(a, b)) False &gt;&gt;&gt; a = [1,2] &gt;&gt;&gt; b = [4,5,-10] &gt;&gt;&gt; all(i &lt; j for (i, j) in zip(a, b)) True </code></pre> <p>Timings with IPython 3.4.2:</p> <pre><code>In [1]: a = [1] * 10000 In [2]: b = [1] * 10000 In [3]: %timeit all(i &lt; j for (i, j) in zip(a, b)) 1000 loops, best of 3: 995 µs per loop In [4]: %timeit all(starmap(lt, zip(a, b))) 1000 loops, best of 3: 487 µs per loop </code></pre> <p>So the starmap is faster in this case. In general 2 things are relatively slow in Python: function calls and global name lookups. The <code>starmap</code> of <a href="https://stackoverflow.com/a/28767765/918959">Retard's solution</a> seems to win here exactly because the tuple yielded from <code>zip</code> can be fed as-is as the *args to the <code>lt</code> builtin function, whereas my code needs to deconstruct it.</p>
python|list|python-3.x|numpy|itertools
14
189
51,091,981
Making a numpy array from bytes packed with struct
<p>The following piece of python code:</p> <pre><code>import numpy as np import struct arr = [] arr.append(struct.pack('ii', 1, 3)) arr.append(struct.pack('ii', 2, 4)) dt = np.dtype([('n','i4'),('m','i4')]) a = np.array(arr,dt) print(a) </code></pre> <p>returns with <code>[(1, 3) (2, 4)]</code> (as I expected) under <code>Numpy</code> version <code>1.13.3</code> but under version <code>1.14.5</code> it fails with:</p> <pre><code>a = np.array(arr,dt) ValueError: invalid literal for int() with base 10: b'\x01\x00\x00\x00\x03\x00\x00\x00' </code></pre> <p>Is this a feature or a bug? I would like to get this to work under <code>1.14.5</code> as it does under <code>1.13.3</code> if possible.</p>
<p>You can get around this in <strong><code>1.14</code></strong> by using <strong><code>frombuffer</code></strong></p> <pre><code>&gt;&gt;&gt; np.frombuffer(np.array(arr), dt) array([(1, 3), (2, 4)], dtype=[('n', '&lt;i4'), ('m', '&lt;i4')]) </code></pre> <hr> <p>I <em>believe</em> this has to do with the <a href="https://docs.scipy.org/doc/numpy-1.14.0/release.html#deprecations" rel="nofollow noreferrer">changes to <strong><code>fromstring</code></strong></a> that appeared in <code>numpy 1.14</code>, although I would appreciate it if someone can verify.</p>
python|numpy
0
190
50,993,568
Any easier way to assign values of a DataFrame row to corresponding variables in a custom object?
<p>Let's say I have the following DataFrame with some sample rows:</p> <pre><code> id first_name last_name age 0 1 John Doe 18 1 2 Joe Shmuck 21 </code></pre> <p>Let's say I also have a custom Python class called <code>Person</code> which ought to represent the values of the DataFrame above. For convenience, the DataFrame's column names correspond exactly to the attributes of the class. </p> <pre><code>class Person: id first_name last_name age </code></pre> <p>I understand I can retrieve the values directly from a row (of a DataFrame) by providing the column index or the column name e.g: <code>df.iloc[0]['age']</code> however I want to have a slightly safer coding practice throughout my application and call <code>person.age</code> or even better a getter <code>person.get_age()</code>.</p> <p>The only, primitive way I'm doing is iterating through the columns of a row of my DataFrame, retrieving each cell and assigning them to the variables of new Person object one by one. e.g: <code>person.first_name = df.loc[0]['first_name']</code></p> <p>Is there a helpful tool which DataFrame, or Series, or any other Python library provides to streamline this? i.e. some wishful thinking like <code>person = df.loc[0].transform(type=Person)</code> </p>
<p>Do you really need a class for this? You can use <code>df.itertuples</code> to create a list of "Person" <code>namedtuple</code>s:</p> <pre><code>&gt;&gt;&gt; list(df.itertuples(index=False, name='Person')) </code></pre> <p></p> <pre><code>[Person(id=1, first_name='John', last_name='Doe', age=18), Person(id=2, first_name='Joe', last_name='Shmuck', age=21) ] </code></pre> <p>A namedtuple behaves a lot like a class in the sense that you can access its attributes (<code>p.age</code>, <code>p.id</code>, and so on).</p> <pre><code>for p in df.itertuples(index=False, name='Person'): print(p.first_name) John Joe </code></pre>
python|pandas|dataframe|series
1
191
51,010,662
Getting the adjugate of matrix in python
<p>I am having some problems solving the question of finding the adjugate of a matrix, given this formula for the cofactor matrix</p> <pre><code>c[i][j] = (-1)**(i+j)*m[i][j] </code></pre> <p>where m[i][j] is the minor, i.e. the determinant of the submatrix obtained by deleting row i and column j.</p> <pre><code>x = np.array([[1,3,5],[-2,-4,-5],[3,6,1]] , dtype = 'int') </code></pre> <p>I am only able to do this and don't know how to continue, please help.</p> <p>To find the cofactor I have this hint: def COF(C) creates an empty matrix CO, then</p> <pre><code>for row:
    for col:
        sel_rows = all rows except current row
        sel_columns = all cols except current col
        MATij = [selected rows and selected columns]
        compute COij
return CO
</code></pre>
<p>You can calculate the adjugate matrix by the transposal of the cofactor matrix with the method below which is suitable for non singular matrices. First, find the cofactor matrix, as follows: <a href="https://www.geeksforgeeks.org/how-to-find-cofactor-of-a-matrix-using-numpy/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-find-cofactor-of-a-matrix-using-numpy/</a> Then, find the transposal of the cofactor matrix.</p> <pre><code>import numpy as np import math as mth # get cofactors matrix def getcofat(x): eps = 1e-6 detx = np.linalg.det(x) if (mth.fabs(detx) &lt; eps): print(&quot;No possible to get cofactors for singular matrix with this method&quot;) x = None return x invx = np.linalg.pinv(x) invxT = invx.T x = invxT * detx return x # get adj matrix def getadj(x): eps = 1e-6 detx = np.linalg.det(x) if (mth.fabs(detx) &lt; eps): print(&quot;No possible to get adj matrix for singular matrix with this method&quot;) adjx = None return adjx cofatx = getcofat(x) adjx = cofatx.T return adjx A = np.array([[1, 3, 5], [-2, -4, -5], [3, 6, 1]]) print(A) print(np.linalg.det(A)) Acofat = getcofat(A) print(Acofat) Aadj = getadj(A) print(Aadj) </code></pre>
python|numpy
0
192
9,215,174
concatenate numpy arrays that are class instance attributes in python
<p>I am attempting to use a class that strings together several instances of another class as a numpy array of objects. I want to be able to concatenate attributes of the instances that are contained in the numpy array. I figured out a sloppy way to do it with a bunch of for loops, but I think there must be a more elegant, pythonic way of doing this. The following code does what I want, but I want to know if there is a cleaner way to do it:</p> <pre><code>import numpy as np class MyClass(object): def __init__(self): self.a = 37. self.arr = np.arange(5) class MyClasses(object): def __init__(self): self.N = 5 # number of MyClass instances to become attributes of this # class def make_subclas_arrays(self): self.my_class_inst = np.empty(shape=self.N, dtype="object") for i in range(self.N): self.my_class_inst[i] = MyClass() def concatenate_attributes(self): self.a = np.zeros(self.N) self.arr = np.zeros(self.N * self.my_class_inst[0].arr.size) for i in range(self.N): self.a[i] = self.my_class_inst[i].a slice_start = i * self.my_class_inst[i].arr.size slice_end = (i + 1.) * self.my_class_inst[i].arr.size self.arr[slice_start:slice_end] = ( self.my_class_inst[i].arr ) my_inst = MyClasses() my_inst.make_subclas_arrays() my_inst.concatenate_attributes() </code></pre> <p>Edit: Based on the response from HYRY, here is what the methods look like now:</p> <pre><code>def make_subclass_arrays(self): self.my_class_inst = np.array([MyClass() for i in range(self.N)]) def concatenate_attributes(self): self.a = np.hstack([i.a for i in self.my_class_inst]) self.arr = np.hstack([i.arr for i in self.my_class_inst]) </code></pre>
<p>you can use numpy.hstack() to concatenate arrays:</p> <pre><code>def concatenate_attributes(self):
    self.a = np.hstack([o.a for o in self.my_class_inst])
    self.arr = np.hstack([o.arr for o in self.my_class_inst])
</code></pre> <h2>See Also</h2> <ul> <li><code>vstack</code> : Stack arrays in sequence vertically (row wise).</li> <li><code>dstack</code> : Stack arrays in sequence depth wise (along third axis).</li> <li><code>concatenate</code> : Join a sequence of arrays together.</li> </ul>
python|class|attributes|numpy|string-concatenation
1
193
66,371,667
How to normalize image in tensorflow.js?
<p>I applied transformation during training phase in pytorch then I convert my model to run in tensorflow.js. It is working fine but got wrong predictions as I didn't apply same transformation.</p> <pre><code>test_transform = torchvision.transforms.Compose([ torchvision.transforms.Resize(size=(224, 224)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) </code></pre> <p>I am able to resize image but not able to normalize. how can I do that?</p> <p>Update:-</p> <pre class="lang-js prettyprint-override"><code>&lt;script src=&quot;https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script&gt; {% load static %} async function load_model(){ const model = await tf.loadGraphModel(&quot;{% static 'disease_detection/tfjs_model_2/model.json' %}&quot;); console.log(model); return model; } function loadImage(src){ return new Promise((resolve, reject) =&gt; { const img = new Image(); img.src = src; img.onload = () =&gt; resolve(tf.browser.fromPixels(img, 3)); img.onerror = (err) =&gt; reject(err); }); } function resizeImage(image) { return tf.image.resizeBilinear(image, [224, 224]).sub([0.485, 0.456, 0.406]).div([0.229, 0.224, 0.225]); } function batchImage(image) { const batchedImage = image.expandDims(0); //const batchedImage = image; return batchedImage.toFloat(); } function loadAndProcessImage(image) { //const croppedImage = cropImage(image); const resizedImage = resizeImage(image); const batchedImage = batchImage(resizedImage); return batchedImage; } let model = load_model(); model.then(function (model_param){ loadImage('{% static 'disease_detection/COVID-19 (97).png' %}').then(img=&gt;{ let imge = loadAndProcessImage(img); const t4d = tf.tensor4d(Array.from(imge.dataSync()),[1,3,224,224]) console.log(t4d.dataSync()); let prediction = model_param.predict(t4d); let v = prediction.argMax().dataSync()[0] console.log(v) }) }) </code></pre> <p>I tried this code but it is not normalizing image properly.</p>
<ul> <li><strong>torchvision.transforms.ToTensor()</strong> converts PIL Image or numpy array in the range of 0 to 255 to a float tensor os shape (channels x Height x Width) in the range 0.0 to 1.0 . To convert in the range 0.0 to 1.0 it divide each element of tensor by 255. So, execute same in tensorflowJS I done as follows -</li> </ul> <pre class="lang-js prettyprint-override"><code>img = tf.image.resizeBilinear(img, [224, 224]).div(tf.scalar(255)) img = tf.cast(img, dtype = 'float32'); </code></pre> <ul> <li><strong>torchvision.transforms.Normalize()</strong> normalize a tensor image with mean and standard deviation. Given mean: (mean[1],...,mean[n]) and std: (std[1],..,std[n]) for n channels, this transform will normalize each channel of the input tensor i.e., output[channel] = (input[channel] - mean[channel]) / std[channel] . I didn't find any such function in tensorflowJS. So, I seperately normalized each channel and combined them again.</li> </ul> <p>Complete function is as follows -</p> <pre class="lang-js prettyprint-override"><code>function imgTransform(img){ img = tf.image.resizeBilinear(img, [224, 224]).div(tf.scalar(255)) img = tf.cast(img, dtype = 'float32'); /*mean of natural image*/ let meanRgb = { red : 0.485, green: 0.456, blue: 0.406 } /* standard deviation of natural image*/ let stdRgb = { red: 0.229, green: 0.224, blue: 0.225 } let indices = [ tf.tensor1d([0], &quot;int32&quot;), tf.tensor1d([1], &quot;int32&quot;), tf.tensor1d([2], &quot;int32&quot;) ]; /* sperating tensor channelwise and applyin normalization to each chanel seperately */ let centeredRgb = { red: tf.gather(img,indices[0],2) .sub(tf.scalar(meanRgb.red)) .div(tf.scalar(stdRgb.red)) .reshape([224,224]), green: tf.gather(img,indices[1],2) .sub(tf.scalar(meanRgb.green)) .div(tf.scalar(stdRgb.green)) .reshape([224,224]), blue: tf.gather(img,indices[2],2) .sub(tf.scalar(meanRgb.blue)) .div(tf.scalar(stdRgb.blue)) .reshape([224,224]), } /* combining seperate normalized channels*/ let processedImg = tf.stack([ centeredRgb.red, centeredRgb.green, centeredRgb.blue ]).expandDims(); return processedImg; } </code></pre>
javascript|tensorflow|deep-learning|tensorflow.js
4
194
66,734,739
My Dataframe contains 500 columns, but I only want to pick out 27 columns in a new Dataframe. How do I do that?
<p>My DataFrame contains 500 columns, but I only want to pick out 27 of them into a new DataFrame. How do I do that?</p> <p>I used <code>query()</code>, but it raised <code>TypeError: query() takes from 2 to 3 positional arguments but 27 were given</code>.</p>
<p>If you want to select the columns based on their name, you can do the following:</p> <pre><code>df_new = df[[&quot;colA&quot;, &quot;colB&quot;, &quot;colC&quot;, ...]] </code></pre> <p>or use the &quot;filter&quot; function:</p> <pre><code>df_new = df.filter([&quot;colA&quot;, &quot;colB&quot;, &quot;colC&quot;, ..]) </code></pre> <p>In case that your column selection is based on the index of columns:</p> <pre><code>df_new = df.iloc[:, 0:27] # if columns are consecutive df_new = df.iloc[:, [0,2,10,..]] # if columns are not consecutive (the numbers refer to the column indices) </code></pre>
pandas
0
195
66,696,489
Python Panda : Count number of occurence of a number
<p>I've searched for long time and I need your help, I'm newbie on python and panda lib. I've a dataframe like that charged from a csv file :</p> <pre><code>ball_1,ball_2,ball_3,ball_4,ball_5,ball_6,ball_7,extraball_1,extraball_2 10,32,25,5,8,19,21,3,4 43,12,8,19,4,37,12,1,5 12,16,43,19,4,28,40,2,4 </code></pre> <p>ball_X is an int in between 1-50 and extraball_X is an int between 1-9. I want count how many times appear each number in 2 other frames like that : First DF ball :</p> <pre><code>Number,Score 1,128 2,34 3,12 4,200 .... 50,145 </code></pre> <p>Second DF extraball :</p> <pre><code>Number,Score 1,340 2,430 3,123 4,540 .... 9,120 </code></pre> <p>I've the algorythme in my head but i'm too noob in panda to translate into code. I Hope it's clear enough and someone will be able to help me. Dont hesitate if you have questions.</p>
<h3><code>groupby</code> on <code>columns</code> with <code>value_counts</code></h3> <pre><code>def get_before_underscore(x): return x.split('_', 1)[0] val_counts = { k: d.stack().value_counts() for k, d in df.groupby(get_before_underscore, axis=1) } print(val_counts['ball']) 12 3 19 3 4 2 8 2 43 2 32 1 5 1 10 1 37 1 40 1 16 1 21 1 25 1 28 1 dtype: int64 print(val_counts['extraball']) 4 2 1 1 2 1 3 1 5 1 dtype: int64 </code></pre>
python|pandas|data-science
3
196
66,726,869
Group a numpy array
<p>I have an one-dimensional array <code>A</code>, such that <code>0 &lt;= A[i] &lt;= 11</code>, and I want to map <code>A</code> to an array <code>B</code> such that</p> <pre><code>for i in range(len(A)): if 0 &lt;= A[i] &lt;= 2: B[i] = 0 elif 3 &lt;= A[i] &lt;= 5: B[i] = 1 elif 6 &lt;= A[i] &lt;= 8: B[i] = 2 elif 9 &lt;= A[i] &lt;= 11: B[i] = 3 </code></pre> <p>How can implement this efficiently in <code>numpy</code>?</p>
<p>You need to use an int division by <code>//3</code>, and that is the most performant solution</p> <pre><code>A = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) B = A // 3 print(A) # [0 1 2 3 4 5 6 7 8 9 10 11] print(B) # [0 0 0 1 1 1 2 2 2 3 3 3] </code></pre>
python|arrays|numpy|matrix
2
197
66,609,544
Obtaining a list of x,y coordinates of a specific RGB value from a screenshot of the screen
<p>I have been trying to take a screenshot of my screen and find every x,y coordinate of a specific color.</p> <pre><code>from PIL import ImageGrab import numpy as np image = ImageGrab.grab() indices = np.all(image == (209, 219, 221), axis=-1) print(indices) print(zip(indices[0], indices[1])) </code></pre> <p>When I run my code, I receive one coordinate and then an error message.</p> <pre><code>(1126, 555) [1126, 555] False print(zip(indices[0], indices[1])) IndexError: invalid index to scalar variable. </code></pre> <p>How come it isn't working? The color is on-screen.</p>
<p>I believe you're making an error with the following line:</p> <pre><code>indices = np.all(image == (209, 219, 221), axis=-1) </code></pre> <p>You can iterate over the pixels directly and achieve the result you want:</p> <pre><code>from PIL import ImageGrab import numpy as np image = ImageGrab.grab() color = (43, 43, 43) indices = [] width, height = image.size for x in range(width): for y in range(height): if image.getpixel((x, y)) == color: indices.append((x, y)) print(indices) </code></pre>
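<p>If the loop turns out to be too slow for a full-screen grab, a vectorized sketch with NumPy (this assumes you first convert the PIL image to an array; note that the row/column order means the results come back as (y, x) and are swapped here to get (x, y)):</p> <pre><code>import numpy as np
from PIL import ImageGrab

arr = np.array(ImageGrab.grab().convert('RGB'))      # shape (height, width, 3)
ys, xs = np.where(np.all(arr == (43, 43, 43), axis=-1))
indices = list(zip(xs, ys))                          # (x, y) coordinates
</code></pre>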
python|image|numpy|image-processing|rgb
0
198
66,686,745
When im converting the predicted value this gives me IndexError: invalid index to scalar variable
<pre><code>@app.route('/predict', methods=['GET', 'POST']) def upload(): if request.method == 'POST': # Get the file from post request f = request.files['file'] # Save the file to ./uploads basepath = os.path.dirname(__file__) file_path = os.path.join( basepath, 'uploads', secure_filename(f.filename)) f.save(file_path) # Make prediction preds = model_predict(file_path, model) print('make predict', preds) # Process your result for human pred_class = preds.argmax(axis=-1) # Simple argmax print(pred_class) # pred_class = decode_predictions(preds, top=1) # ImageNet Decode result = str(pred_class[0][0][1]) # Convert to string return result return None </code></pre> <p>result = str(pred_class[0][0][1]) # Convert to string IndexError: invalid index to scalar variable. 127.0.0.1 - - [18/Mar/2021 13:10:07] &quot;POST /predict HTTP/1.1&quot; 500 -</p>
<p>If <code>make_predict = [[0. 1. 0. 0. 0. 0. 0.]]</code>, then you are trying to index into the scalar integer that <code>pred_class[0][0]</code> already gives you. Removing the redundant indexer, i.e. changing <code>result = str(pred_class[0][0][1])</code> to <code>result = str(pred_class[0][0])  # Convert to string</code>, should fix the problem.</p> <p>Cheers.</p>
python|pandas|numpy|tensorflow|keras
0
199
66,719,264
Create a dataframe from multiple list of dictionary values
<p>I have a code as below,</p> <pre><code>safety_df ={} for key3,safety in analy_df.items(): safety = pd.DataFrame({&quot;Year&quot;:safety['index'], '{}'.format(key3)+&quot;_CR&quot;:safety['CURRENT'], '{}'.format(key3)+&quot;_ICR&quot;:safety['ICR'], '{}'.format(key3)+&quot;_D/E&quot;:safety['D/E'], '{}'.format(key3)+&quot;_D/A&quot;:safety['D/A']}) safety_df[key3] = safety </code></pre> <p>Here in this code I'm extracting values from another dictionary. It will looping through the various companies that why I named using format in the key. The output contains above 5 columns for each company(Year,CR, ICR,D/E,D/A).</p> <p>Output which is printing out is with plenty of NA values where after Here I want common column which is year for all companies and print following columns which is C1_CR, C2_CR, C3_CR, C1_ICR, C2_ICR, C3_ICR,...C3_D/A ..</p> <p>I tried to extract using following code,</p> <pre><code>pd.concat(safety_df.values()) </code></pre> <p>Sample output of this..</p> <p><a href="https://i.stack.imgur.com/s8v8f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s8v8f.png" alt="enter image description here" /></a></p> <p>Here it extracts values for each list, but NA values are getting printed out because of for loops?</p> <p>I also tried with groupby and it was not worked?..</p> <p>How to set Year as common column, and print other values side by side.</p> <p>Thanks</p>
<p>Use <code>axis=1</code> to concate along the columns:</p> <pre><code>import numpy as np import pandas as pd years = np.arange(2010, 2021) n = len(years) c1 = np.random.rand(n) c2 = np.random.rand(n) c3 = np.random.rand(n) frames = { 'a': pd.DataFrame({'year': years, 'c1': c1}), 'b': pd.DataFrame({'year': years, 'c2': c2}), 'c': pd.DataFrame({'year': years[1:], 'c3': c3[1:]}), } for key in frames: frames[key].set_index('year', inplace=True) df = pd.concat(frames.values(), axis=1) print(df) </code></pre> <p>which results in</p> <pre><code> c1 c2 c3 year 2010 0.956494 0.667499 NaN 2011 0.945344 0.578535 0.780039 2012 0.262117 0.080678 0.084415 2013 0.458592 0.390832 0.310181 2014 0.094028 0.843971 0.886331 2015 0.774905 0.192438 0.883722 2016 0.254918 0.095353 0.774190 2017 0.724667 0.397913 0.650906 2018 0.277498 0.531180 0.091791 2019 0.238076 0.917023 0.387511 2020 0.677015 0.159720 0.063264 </code></pre> <p>Note that I have explicitly set the index to be the 'year' column, and in my example, I have removed the first year from the 'c' column. This is to show how the indices of the different dataframes are matched when concatenating. Had the index been left to its standard value, you would have gotten the years out of sync and a NaN value at the bottom of column 'c' instead.</p>
python|pandas
1