Exercise 8 8.1. What is the scriptSig from the second input in this tx? 8.2. What is the scriptPubKey and amount of the first output in this tx? 8.3. What is the amount for the second output?```010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600```
# Exercise 8.1/8.2/8.3 from io import BytesIO from tx import Tx hex_transaction = '010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600' # bytes.fromhex to get the binary representation # create a stream using BytesIO() # Tx.parse() the stream # print tx's second input's scriptSig # print tx's first output's scriptPubKey # print tx's second output's amount
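A hedged completion of those steps (assuming the session's `tx` module follows the book's API, where a parsed `Tx` exposes `tx_ins`/`tx_outs` whose elements carry `script_sig`, `script_pubkey` and `amount` attributes) might look like:

```python
# Possible solution sketch -- attribute names assume the book's Tx/TxIn/TxOut API.
stream = BytesIO(bytes.fromhex(hex_transaction))  # binary stream over the raw tx bytes
tx_obj = Tx.parse(stream)                         # parse the serialized transaction
print(tx_obj.tx_ins[1].script_sig)                # 8.1: scriptSig of the second input
print(tx_obj.tx_outs[0].script_pubkey)            # 8.2: scriptPubKey of the first output
print(tx_obj.tx_outs[0].amount)                   # 8.2: amount of the first output (satoshis)
print(tx_obj.tx_outs[1].amount)                   # 8.3: amount of the second output
```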
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Sequences of Multi-labelled Data

Earlier we examined the notion that documents can be thought of as a sequence of tokens along with a mapping from a set of labels to those tokens. Ideas like stemming and lemmatization are linguistic methods for applying different label mappings to these token sequences. An example of this might be the sequence of tokens associated with the sentence:

"the cats sat in the cat box"

This can be decomposed into the sequence of tokens:

[the, cats, sat, in, the, cat, box]

The notions of token and label are often conflated, but it can be useful to keep them separate mentally. Here we have a sequence of seven tokens, _token_1_ through _token_7_, along with a single label associated with each token. In this case _'the' $\mapsto$ token_1_ and _'the' $\mapsto$ token_5_. They are different tokens with the same label.

We take advantage of these shared labels in our TokenCooccurrenceVectorizer, where we embed token labels into a vector space based on the locality of their associated tokens within our sequence. That is, two labels are similar if their surrounding tokens share many common labels. This is covered in more explicit detail in:

[TODO: Add Link to TokenCooccurrenceVectorizer readme](https://noteboook_link_here)

That said, there is nothing which necessitates that we associate a single label with each token. In fact, as we mentioned previously, stemming can be thought of as a way of contracting the label space by collapsing a set of labels such as _(tabby, feline, cat, cats)_ to a single token label _(cat)_. Traditionally in NLP, we would replace the label mapping for all the tokens previously associated with the various feline labels with our new canonical label _cat_. This has the twin advantages of simplifying our label space, making it easier to analyse, and of seeing more examples of _cat_ in our text, potentially improving the embedding of this label by providing more contexts in which it is used. If we didn't care about simplifying our label space we could, in fact, get the second benefit without the first. Instead of replacing the label mapping for each of these tokens, we could map from a set of labels to each token rather than from a single label. Thus, in our previous example of *"the cats sat in the cat box"*, _{cat, cats} $\mapsto$ token_2_.

One oddity this introduces is that the context of _sat_ now has both the labels _cat_ and _cats_ occurring at a distance of 1 from it. This can become problematic if taken to extreme levels, in that very wide context windows combined with large label sets could cause a combinatorial explosion. Used in moderation, however, this is a powerful and quite useful technique, in that it allows us a great deal of freedom when moving beyond the text domain to other sequence domains. The other oddity this introduces is that the labels _cat_ and _cats_ now cooccur within a context window at distance zero from each other. This has the nice property of encoding the label similarity specified by our mapping into our embedding.

A great example of the usefulness of this multi-label framework occurs when looking at the cyber defense domain. The [Operationally Transparent Cyber (OpTC)](https://github.com/FiveDirections/OpTC-data) data is a DARPA-released data set that records the sequence of computer events on a small network of computers and asks researchers to search for known malicious activity within these events.
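Before we move on to the cyber data, here is a minimal hand-built sketch of what this multi-label version of the cat sentence looks like in the list-of-lists form that TokenCooccurrenceVectorizer consumes (purely illustrative):

```python
# "the cats sat in the cat box" as a sequence of multi-labelled tokens.
# Both feline tokens carry the label set {cat, cats}, as discussed above.
multi_labelled_sentence = [
    ["the"],          # token_1
    ["cat", "cats"],  # token_2
    ["sat"],          # token_3
    ["in"],           # token_4
    ["the"],          # token_5
    ["cat", "cats"],  # token_6
    ["box"],          # token_7
]
```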
More details on this dataset, its content, and its utility for cyber defense analysis can be found in the recent paper [Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research](https://arxiv.org/pdf/2103.03080.pdf). The interesting thing for us to note is that this data was described as a **"sequence of events"**. What differentiates cyber **events** from the **tokens** we spoke about in NLP? The short answer is nothing, other than the fact that they can't easily be represented by a single label. Now we have a framework for representing such multi-labelled tokens or events. Let's see what that might look like in practice.

Cyber events come in a variety of flavours, the most common being FLOW events. Flow events represent data being transmitted between two computers. They are commonly summarized via a set of descriptors such as:

[process, source_IP, source_port, destination_IP, destination_port, protocol, time_stamp]

Here a process instantiated a connection from a source IP address and port to a destination IP address and port, and sent data over that connection at a particular time.

Previously it might have been difficult to think about how we could apply something like a TokenCooccurrenceVectorizer to this sort of data, but with our new notion of multi-labelled tokens we quickly realize that flow events are really just tokens with interesting labels associated with them, and a sequence induced via their time_stamps. This should allow us to embed these labels into a useful vector space, with similar tokens being placed near other tokens that appear within the same event and have similar preceding and following events. Let's walk through a few increasingly complex examples.

Import Some Libraries

We'll need TokenCooccurrenceVectorizer from our vectorizers library, along with a few helper functions for dealing with our data and plotting it.
from vectorizers import TokenCooccurrenceVectorizer
from vectorizers.utils import flatten

import pandas as pd
import numpy as np
import umap
import umap.plot
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
We'll add some bokeh imports for easy interactive plots
import bokeh.io
bokeh.io.output_notebook()
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
Let's fetch some data

The OpTC data is a bit difficult to pull and parse into easy-to-process formats; I will leave that as an exercise to the reader. A colleague of ours has pulled this data and restructured it into parquet files distributed across a nice file structure, but that is outside the scope of this notebook. For the purposes of this notebook we will load simple pandas data frames that were derived from this events data. Each event is a row in one of these data frames, and we have a wide set of columns representing all of the summary labels that one might use to describe our various types of cyber events.

In order for this example to be easily reproducible on a reasonably small machine, we limit our initial analysis to one day's worth of FLOW MESSAGE data on a single host. This is for demonstration purposes only; a useful analysis of this data should be broadened to incorporate more of it.
flows_onehost_oneday = pd.read_csv("optc_flows_onehost_oneday.csv")
flows_onehost_oneday.shape
flows_onehost_oneday.columns
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
You'll notice that we have just over a million events described by a wide variety of descriptive columns. Since we've limited our data to network flow data, many of these columns aren't populated for this particular data set. For a more detailed description of this data and these fields I point the reader to the paper we mentioned earlier, [Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research](https://arxiv.org/pdf/2103.03080.pdf). For the purposes of this notebook we are most interested in:

[process, source_IP, source_port, destination_IP, destination_port, protocol, time_stamp]

In this data these correspond to the fields:

['image_path', 'src_ip', 'src_port', 'dest_ip', 'dest_port', 'l4protocol', 'timestamp']
flow_variables = ['image_path', 'src_ip', 'src_port', 'dest_ip', 'dest_port', 'l4protocol']
categorical_variables = flow_variables
sort_by = ['timestamp']
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
Restructure our data

Now we need to restructure this data into a format for easy consumption by our TokenCooccurrenceVectorizer. We will convert each row of our data frame into a sequence of multi-labelled events. To do that we'll need to convert from a list of categorical column values into a list of labels. An easy way to define a label associated with a categorical column is in the form of the string f'{column_name}:{value}'.

We'll first ensure that our events are properly ordered by time.
flows_sorted = flows_onehost_oneday.sort_values(by = 'timestamp')
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
Now we limit ourselves to the columns of interest for these particular events.
flows_df = flows_sorted[flows_sorted.columns.intersection(categorical_variables)]
flows_df.shape
flows_df.head(3)
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
Now we'll quickly iterate through this dataframe and convert it into our list-of-lists format.
def categorical_columns_to_list(data_frame, column_names):
    """
    Takes a data frame and a set of columns and represents each row as a list of the
    appropriate non-empty columns of the form column_name:value.
    """
    label_list = pd.Series([[f'{k}:{v}' for k, v in zip(column_names, t) if v is not None]
                            for t in zip(*map(data_frame.get, column_names))])
    return label_list

flow_labels = categorical_columns_to_list(flows_df, categorical_variables)
len(flow_labels)
flow_labels[0]
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
TokenCooccurrenceVectorizer

We initially only embed labels that occur at least 20 times within our day's events. This prevents us from attempting to embed labels that we have very limited data for. We will initially select window_radii=2 in order to include some very limited sequence information. The presumption here is that flow messages that occurred near each other in the sequence of flow events are related to each other. Lastly, we set multi_labelled_tokens=True to convey that we are dealing with a sequence of multi-labelled events.
word_vectorizer = TokenCooccurrenceVectorizer(
    min_occurrences=20,
    window_radii=2,
    multi_labelled_tokens=True).fit(flow_labels)
word_vectors = word_vectorizer.reduce_dimension()

print(f"This constructs an embedding of {word_vectorizer.cooccurrences_.shape[0]} labels represented by their",
      f"cooccurrence with {word_vectorizer.cooccurrences_.shape[1]} labels occurring before and after them.\n",
      f"We have then reduced this space to a {word_vectors.shape[1]} dimensional representation.")
This constructs an embedding of 15014 labels represented by their cooccurrence with 30028 labels occurring before and after them. We have then reduced this space to a 150 dimensional representation.
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
For the purposes of visualization, we will use our UMAP algorithm to embed this data into a two-dimensional space.
model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)
hover_df = pd.DataFrame({'label': word_vectorizer.token_label_dictionary_.keys()})
event_type = hover_df.label.str.split(':', n=1, expand=True)
event_type.columns = ['type', 'value']
umap.plot.points(model, theme='fire', labels=event_type['type']);
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
A little exploration of this space quickly reveals that our label space is overwhelmed by source ports (and some destination ports) with values in the range 40,000 to 60,000. A little consultation with subject matter experts reveals that these are so-called ephemeral ports: a pre-established range of ports that are used to establish temporary connections and then thrown away, to be re-used by other processes later. The disposable nature of these ports explains why there is such a plethora of them within our label space. In fact, what we see here are clusters of these ports that are all used by the same process and IP pairs over the course of our day. Though it's encouraging that we can easily detect this structure, it is essentially meaningless: it tells us nothing about the flows or processes and is completely unstable over any period of time. As such, we will want to remove these tokens from our space.

Fortunately, TokenCooccurrenceVectorizer() has an excluded_token_regex parameter which allows us to remove these tokens with very little work.
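As a quick sanity check on the pattern we are about to pass in, here is a small sketch using Python's built-in re module (the example labels are made up):

```python
import re

# Exclude src_port/dest_port labels whose port is a five-digit number starting
# with 4, 5 or 6 -- roughly the ephemeral range we saw in the plot above.
ephemeral_pattern = re.compile(r'(src\_port|dest\_port):[4-6][0-9]{4}')

for label in ["src_port:49732", "dest_port:443", "src_port:8080", "dest_port:60001"]:
    print(label, "->", "excluded" if ephemeral_pattern.search(label) else "kept")
```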
word_vectorizer = TokenCooccurrenceVectorizer(
    min_occurrences=20,
    window_radii=2,
    excluded_token_regex='(src\_port|dest\_port):[4-6][0-9]{4}',
    multi_labelled_tokens=True).fit(flow_labels)
word_vectors = word_vectorizer.reduce_dimension()

print(f"This constructs an embedding of {word_vectorizer.cooccurrences_.shape[0]} labels represented by their",
      f"cooccurrence with {word_vectorizer.cooccurrences_.shape[1]} labels occurring before and after them.\n",
      f"We have then reduced this space to a {word_vectors.shape[1]} dimensional representation.")
This constructs an embedding of 3245 labels represented by their cooccurrence with 6490 labels occurring before and after them. We have then reduced this space to a 150 dimensional representation.
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
As before, we'll reduce this 150-dimensional representation to a two-dimensional representation for visualization and exploration. Since we are already using subject matter knowledge to enrich our analysis, we will continue in this vein and label our IP addresses with whether they are internal or external addresses. Internal IP addresses are of the form 10.\*.\*.\*.
model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)
hover_df = pd.DataFrame({'label': word_vectorizer.token_label_dictionary_.keys()})
internal_bool = hover_df.label.str.contains("ip:10\.")
event_type = hover_df.label.str.split(':', n=1, expand=True)
event_type.columns = ['type', 'value']
event_type['type'][internal_bool] = event_type['type'][internal_bool] + '_internal'
umap.plot.points(model, theme='fire', labels=event_type['type']);
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
This provides a nice structure over our token label space. We see interesting mixtures of internal and external source IP spaces, with connections making use of specific source and destination ports separating off nicely into their own clusters.

The next step would be to look at your data by building an interactive plot and starting to explore these clusters in earnest.
p = umap.plot.interactive(model, theme='fire', labels=event_type['type'],
                          hover_data=hover_df, point_size=3, width=600, height=600)
umap.plot.show(p)
_____no_output_____
BSD-3-Clause
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
Numpy" NumPy is the fundamental package for scientific computing with Python. It contains among other things:* a powerful N-dimensional array object* sophisticated (broadcasting) functions* useful linear algebra, Fourier transform, and random number capabilities "-- From the [NumPy](http://www.numpy.org/) landing page. Before learning about numpy, we introduce.. The NXOR FunctionMany of the exercises involve working with the $\mathrm{NXOR} \colon \; [-1, 1]^2 \rightarrow \{-1, +1\}$ function defined as $$ (x_1, x_2) \longmapsto \mathrm{sgn}(x_1 \cdot x_2) .$$where for $x_1 \cdot x_2 = 0$ we let $\mathrm{NXOR}(x_1, x_2) = -1$.We can visualize this function as![A set of points in \[-1, +1\]^2 with green and red markers denoting the value assigned to them by the NXOR function](https://github.com/tmlss2018/PracticalSessions/blob/master/assets/nxor_labels.png?raw=true)where each point in $ [-1, 1]^2$ is marked by green (+1) or red (-1) according to the value assigned to it by the NXOR function. Over the course of the intro lab exercises we will1. Generate such data with numpy.2. Create the plot above with matplotlib.3. Train a model to learn this function. Setup and imports. Run the following cell.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Random numbers in numpy
np.random.random((3, 2)) # Array of shape (3, 2), entries uniform in [0, 1).
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Note that (as usual in computing) numpy produces pseudo-random numbers based on a seed, or more precisely a random state. In order to make random sequences and the calculations based on them reproducible, use
* the [`np.random.seed()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.seed.html) function to set the default global seed, or
* the [`np.random.RandomState`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.RandomState.html) class, which is a container for a pseudo-random number generator and exposes methods for generating random numbers.
np.random.seed(0)
print(np.random.random(2))

# Reset the global random state to the same state.
np.random.seed(0)
print(np.random.random(2))
[0.5488135 0.71518937] [0.5488135 0.71518937]
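The np.random.RandomState route mentioned above keeps the generator state in an object rather than in the global state; a minimal sketch:

```python
# Two generators seeded identically produce the same stream, independently of
# whatever the global np.random state currently is.
rng_a = np.random.RandomState(42)
rng_b = np.random.RandomState(42)
print(rng_a.random_sample(2))
print(rng_b.random_sample(2))  # same two numbers as rng_a
```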
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Numpy Array Operations 1

There are a large number of operations you can run on any numpy array. Here we showcase some common ones.
# Create one from hard-coded data:
ar = np.array([
    [0.0, 0.2],
    [0.9, 0.5],
    [0.3, 0.7],
], dtype=np.float64)  # float64 is the default.

print('The array:\n', ar)
print()
print('data type', ar.dtype)
print('transpose\n', ar.T)
print('shape', ar.shape)
print('reshaping an array', ar.reshape((6)))
The array: [[ 0. 0.2] [ 0.9 0.5] [ 0.3 0.7]] data type float64 transpose [[ 0. 0.9 0.3] [ 0.2 0.5 0.7]] shape (3, 2) reshaping an array [ 0. 0.2 0.9 0.5 0.3 0.7]
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Many numpy operations are available both as np module functions and as array methods. For example, we can also reshape as
print('reshape v2', np.reshape(ar, (6, 1)))
reshape v2 [[ 0. ] [ 0.2] [ 0.9] [ 0.5] [ 0.3] [ 0.7]]
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Numpy Indexing and selectors

Here are some basic indexing examples from numpy.
ar
ar[0, 1]    # row, column
ar[:, 1]    # slices: select all elements across the first (0th) axis.
ar[1:2, 1]  # slices with syntax from:to, selecting [from, to).
ar[1:, 1]   # Omit `to` to go all the way to the end
ar[:2, 1]   # Omit `from` to start from the beginning
ar[0:-1, 1] # Use negative indexing to count elements from the back.
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
We can also pass boolean arrays as indices. These will exactly define which elements to select.
ar[np.array([
    [True, False],
    [False, True],
    [True, False],
])]
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Boolean arrays can be created with logical operations, then used as selectors. Logical operators apply elementwise.
ar_2 = np.array([  # Nearly the same as ar
    [0.0, 0.1],
    [0.9, 0.5],
    [0.0, 0.7],
])

# Where ar_2 is smaller than ar, let ar_2 be -inf.
ar_2[ar_2 < ar] = -np.inf
ar_2
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Numpy Operations 2
print('array:\n', ar)
print()
print('sum across axis 0 (rows):', ar.sum(axis=0))
print('mean', ar.mean())
print('min', ar.min())
print('row-wise min', ar.min(axis=1))
array: [[ 0. 0.2] [ 0.9 0.5] [ 0.3 0.7]] sum across axis 0 (rows): [ 1.2 1.4] mean 0.433333333333 min 0.0 row-wise min [ 0. 0.5 0.3]
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
We can also take element-wise minimums between two arrays. We may want to do this when "clipping" values in a matrix, that is, setting any values larger than, say, 0.6, to 0.6. We would do this in numpy with...

Broadcasting (and selectors)
np.minimum(ar, 0.6)
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Numpy automatically turns the scalar 0.6 into an array the same size as `ar` in order to take the element-wise minimum. Broadcasting can save us a lot of typing, but in complicated cases it may require a good understanding of the exact rules followed.

Some references:
* [Numpy page that explains broadcasting](https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)
* [Similar content with some visualizations](http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc)

Here we follow with a selection of other useful broadcasting examples.
# Centering our array.
print('centered array:\n', ar - np.mean(ar))
centered array: [[-0.43333333 -0.23333333] [ 0.46666667 0.06666667] [-0.13333333 0.26666667]]
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Note that `np.mean(ar)` was a scalar, but it is automatically subtracted from every element. We can write the minimum function ourselves, as well.
clipped_ar = ar.copy()  # So that ar is not modified.
clipped_ar[clipped_ar > 0.6] = 0.6
clipped_ar
_____no_output_____
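As an aside, before unpacking what happened in the selector-based cell above: the same clipping can be written as a single expression with np.where, shown here only for comparison (it is not used elsewhere in this notebook):

```python
# Where ar exceeds 0.6 take 0.6, otherwise keep the original value.
np.where(ar > 0.6, 0.6, ar)
```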
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
A few things happened here:
1. 0.6 was broadcast in for the greater than (>) operation.
2. The greater than operation defined a selector, selecting a subset of the elements of the array.
3. 0.6 was broadcast to the right number of elements for assignment.

Vectors may also be broadcast into matrices.
vec = np.array([1, 2])
ar + vec
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Here the shapes of the involved arrays are:
```
ar     (2d array): 3 x 2
vec    (1d array):     2
Result (2d array): 3 x 2
```
When either of the dimensions compared is one (even implicitly, like in the case of `vec`), the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other. Here, this meant that the `[1, 2]` row was repeated to match the number of rows in `ar`, then added together.

If there is a shape mismatch, you will be informed. To try, uncomment the line below and run it.
#ar + np.array([[1, 2, 3]])
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Exercise

Broadcast and add the vector `[10, 20, 30]` across the columns of `ar`. You should get
```
array([[10. , 10.2],
       [20.9, 20.5],
       [30.3, 30.7]])
```
#@title Code
# Recall that you can use vec.shape to verify that your array has the
# shape you expect.

### Your code here ###

#@title Solution
vec = np.array([[10], [20], [30]])
ar + vec
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
`np.newaxis`

We can use another numpy feature, `np.newaxis`, to simply form the column vector that was required for the example above. It adds a singleton dimension to arrays at the desired location:
vec = np.array([1, 2])
vec.shape
vec[np.newaxis, :].shape
vec[:, np.newaxis].shape
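Tying this back to the broadcasting exercise above, the same column vector can be formed with `np.newaxis` instead of typing it out by hand (a small sketch reusing `ar` from earlier):

```python
# Equivalent to vec = np.array([[10], [20], [30]]) from the exercise solution.
ar + np.array([10, 20, 30])[:, np.newaxis]
```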
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Now you know more than enough to generate some example data for our `NXOR` function.

Exercise: Generate Data for NXOR

Write a function `get_data(num_examples)` that returns two numpy arrays:
* `inputs` of shape `num_examples x 2` with points selected uniformly from the $[-1, 1]^2$ domain.
* `labels` of shape `num_examples` with the associated output of `NXOR`.
#@title Code
def get_data(num_examples):
  # Replace with your code.
  return np.zeros((num_examples, 2)), np.zeros((num_examples))

#@title Solution
# Solution 1.
def get_data(num_examples):
  inputs = 2*np.random.random((num_examples, 2)) - 1
  labels = np.prod(inputs, axis=1)
  labels[labels <= 0] = -1
  labels[labels > 0] = 1
  return inputs, labels

# Solution 2.
# def get_data(num_examples):
#   inputs = 2*np.random.random((num_examples, 2)) - 1
#   labels = np.sign(np.prod(inputs, axis=1))
#   labels[labels == 0] = -1
#   return inputs, labels

get_data(4)
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
That's all, folks! For now.
_____no_output_____
MIT
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
Stack Overflow Developer Surveys, 2015-2019
# global printing options
pd.options.display.max_columns = 100
pd.options.display.max_rows = 30
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
Questions explored:
1. Which are the current most commonly used programming languages?
2. How has the prevalence of different programming languages changed throughout the past five years?
3. Which programming languages are currently the most popular for specific types of developers?

Possible questions:
- mode of education + diff lang/frameworks/plats?
- years of experience + diff lang/frameworks/plats?

Challenges:
As is often the case with the practicalities of real-life data, the Stack Overflow developer survey varies each year, presenting unique challenges to making cross-year comparisons.
1. The same languages are classified differently from year to year. For instance, HTML and CSS are combined under one category in the 2019 survey, categorized separately in the 2018 survey, and nonexistent in 2017 and prior surveys.
2. The question in 2017 covers "technologies that you work with", including languages, databases, platforms, and frameworks. The 2018 and 2019 surveys thankfully separated these different variables, but that still means more cleaning for the 2017 dataset!
3. The addition of an "Others" category in 2019 that replaces the most obscure entries from earlier years. For consistency across years, I opted to combine the obscure languages from before 2019 into a single category "Other(s)".

Problem variables:
- HTML/CSS for 2019; 2018 has HTML and CSS separately.
- Bash/Shell/PowerShell for 2019; 2018 has Bash/Shell.
- 2019 has an "Other" category.

End goal - create a line graph of prevalence of languages across different years.

2015
- [x] clean names of 2015 data
- [ ] merge all visual basic under 'Visual Basic / VBA'
- [ ] all years have "Other(s)" as a category
- [ ] delete HTML/CSS from 2018+19
- [ ] delete non-language categories from 2017 and prior
- [ ] uniform Shell/Powershell category
- [ ] chart with languages and years

Language columns by year:
2019: LanguageWorkedWith
2018: LanguageWorkedWith
2017: HaveWorkedLanguage
2016: tech_do

--- Loading data and functions---
# import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import timeit %matplotlib inline df2019 = pd.read_csv('./2019survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False) df2018 = pd.read_csv('./2018survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False) df2017 = pd.read_csv('./2017survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False) df2016 = pd.read_csv('./2016survey_results.csv', header = 0, skipinitialspace= True, low_memory=False) df2015 = pd.read_csv('./2015survey_results.csv', header = 1, skipinitialspace= True, low_memory=False) display(df2019.head(), df2018.head(), df2017.head(), df2016.head(), df2015.head()) uniformize_dict = {'VB.NET':'Visual Basic / VBA', 'VBA':'Visual Basic / VBA', 'Visual Basic':'Visual Basic / VBA', 'Visual Basic 6':'Visual Basic / VBA' } #lists retrieved from previous notebooks not_lang_list = ['Android', 'AngularJS', 'Arduino', 'Arduino / Raspberry Pi', 'Cassandra', 'Cloud', 'Cloud (AWS, GAE, Azure, etc.)', 'Cordova', 'Hadoop', 'LAMP', 'MongoDB', 'Node.js', 'ReactJS', 'Redis', 'SQL Server', 'Salesforce', 'SharePoint', 'Spark', 'Windows Phone', 'Wordpress', 'WordPress', 'Write-In', 'iOS'] lang_comm_list = ['Bash/Shell/PowerShell', 'C', 'C#', 'C++', 'Clojure', 'F#', 'Go','HTML/CSS', 'Java', 'JavaScript', 'Objective-C', 'PHP', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Swift', 'Visual Basic / VBA'] mixed_all_list = ['Android', 'AngularJS', 'Arduino', 'Arduino / Raspberry Pi', 'Assembly', 'Bash/Shell', 'Bash/Shell/PowerShell', 'C', 'C#', 'C++', 'C++11', 'CSS', 'Cassandra', 'Clojure', 'Cloud', 'Cloud (AWS, GAE, Azure, etc.)', 'Cobol', 'CoffeeScript', 'Common Lisp', 'Cordova', 'Dart', 'Delphi/Object Pascal', 'Elixir', 'Erlang', 'F#', 'Go', 'Groovy', 'HTML', 'HTML/CSS', 'Hack', 'Hadoop', 'Haskell', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'LAMP', 'Lua', 'Matlab', 'MongoDB', 'Node.js', 'Objective-C', 'Ocaml', 'Other(s):', 'PHP', 'Perl', 'Python', 'R', 'ReactJS', 'Redis', 'Ruby', 'Rust', 'SQL', 'SQL Server', 'Salesforce', 'Scala', 'SharePoint', 'Sharepoint', 'Smalltalk', 'Spark', 'Swift', 'TypeScript', 'VB.NET', 'VBA', 'Visual Basic', 'Visual Basic 6', 'WebAssembly', 'Windows Phone', 'WordPress', 'Wordpress', 'Write-In', 'iOS'] lang_uncomm_list = ['Assembly', 'Cobol', 'CoffeeScript', 'Common Lisp', 'Dart', 'Delphi/Object Pascal', 'Elixir', 'Erlang', 'Groovy', 'Hack', 'Haskell', 'Julia', 'Kotlin', 'Lua', 'Matlab', 'Ocaml', 'Perl', 'Sharepoint', 'Smalltalk', 'TypeScript', 'WebAssembly',] lang_all_list = ['Assembly', 'Bash/Shell', 'Bash/Shell/PowerShell', 'C', 'C#', 'C++', 'C++11', 'CSS', 'Clojure', 'Cobol', 'CoffeeScript', 'Common Lisp', 'Dart', 'Delphi/Object Pascal', 'Elixir', 'Erlang', 'F#', 'Go', 'Groovy', 'HTML', 'HTML/CSS', 'Hack', 'Haskell', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lua', 'Matlab', 'Objective-C', 'Ocaml', 'Other(s):', 'PHP', 'Perl', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Sharepoint', 'Smalltalk', 'Swift', 'TypeScript', 'VB.NET', 'VBA', 'Visual Basic', 'Visual Basic 6', 'WebAssembly'] def clean_df_cat(df, to_remove): ''' Removes columns that match any of the values in the to_remove list ''' for item in to_remove: for col in df.columns: if item.casefold() == col.casefold(): df = df.drop(col, axis = 1) return df ''' #for 2015 data - multiple columns for one data category #converts a dataframe into a 1-d series to a 2 column df, #returned df has Columns and column counts 
#series sorted alphabetically ''' def make_counts2015(ini_df, series_name, index_name): series = ini_df.count() series = series.rename_axis(index_name) series = series.rename(series_name).sort_index() df = series.reset_index() # df.sort_values(by=[sort_by], inplace = True) return series, df def make_counts(ini_df, series_name, index_name): series = ini_df.sum() series = series.rename_axis(index_name) series = series.rename(series_name).sort_index() df = series.reset_index() # df.sort_values(by=[sort_by], inplace = True) return series, df # sorts a series and converts it to df ''' 2016? series - name of the series to be modified series_name - the desired name for the values (e.g.counts) index_name - the desired name for the index (e.g.Languages) resetindex_Y_N - are we resetting the index? ''' def ser_to_df(series, series_name, index_name, resetindex_Y_N): series = series.rename_axis(index_name) sorted_series = series.rename(series_name).sort_index() if resetindex_Y_N == 'Y': df = sorted_series.reset_index() else: df = pd.DataFrame(sorted_series) return sorted_series, df def eval_complex_col(df, col): ''' IN: df[col] - every str consists of one or more values (e.g. 'a, b, d') OUT: col_vals - All unique elements found in the column, listed alphabetically ''' col_num = df[df[col].isnull() == 0].shape[0] col_df = df[col].value_counts().reset_index() col_df.rename(columns={'index': col, col:'count'}, inplace = True) col_series = pd.Series(col_df[col].unique()).dropna() clean_list = col_series.str.split(pat = ';').tolist() flat_list = [] for sublist in clean_list: for item in sublist: flat_list.append(item) clean_series = pd.DataFrame(flat_list) clean_series[0] = clean_series[0].str.strip() col_vals = clean_series[0].unique() col_vals = pd.Series(sorted(col_vals)) cat_count = clean_series[0].value_counts() # print('Unique Categories: ', col_vals) return cat_count, col_vals ''' for years 2016-2019. processes a a specified column from the raw imported dataframe ''' def process_col(df, col): s = df[col] s = s.dropna() s_len = s.shape[0] cat_count, col_vals = eval_complex_col(df, col) s_split = s.str.split(pat = '; ') return s,s_len, s_split, cat_count, col_vals ''' 2017 to 2019? converts a series of lists into a df with each list as a row also returns a transposed version. ''' def s_of_lists_to_df(s): df = pd.DataFrame(item for item in s) df_transposed = df.transpose() return df, df_transposed def make_df_bool(df, df_transposed, vals_list): ''' creates a df of bool values based on whether each survey response has the value in vals_list. df: dataframe of survey responses, vals_list: list of values for conditions of the new columns, with 1 col per val ''' for item in vals_list: df[item] = df_transposed.iloc[:,:].isin([item]).sum() df_bool = df.loc[:,vals_list] return df_bool ''' Condensed function processing from initial imported df to boolean df Used for 2017-2019 ''' def process_data(df, col): df_droppedna, df_droppedna_len, df_droppedna_split, df_count, df_vals = process_col(df, col) df2, df2_transposed = s_of_lists_to_df(df_droppedna_split) df_bool = make_df_bool(df2, df2_transposed, df_vals) return df_bool, df_vals, df_droppedna_len #must edit the df to match the lang_comm_list first! def find_other_lang(df): other_lang = set(df.columns).difference(set(lang_comm_list)) other_lang_list = list(other_lang) return other_lang_list
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
--- 2015 Dataset---
#slicing the desired columns about Current Lang & Tech from the rest of the 2015 df #modify new df column names to match list df2015_mix = df2015.loc[:,'Current Lang & Tech: Android':'Current Lang & Tech: Write-In'] df2015_mix.columns = df2015_mix.columns.str.replace('Current Lang & Tech: ', '') #df2015_mix.columns = df2015_mix.columns.str.casefold() display(df2015_mix.info(), df2015_mix.head()) dflang2015 = clean_df_cat(df2015_mix, not_lang_list) print(df2015_mix.shape, dflang2015.shape) dflang2015 = dflang2015.rename(columns = {"C++": "C++_ini", "Visual Basic": "Visual Basic / VBA"}) dflang2015.head() #make the new column 'C++' with booleans dflang2015['C++'] = ((dflang2015['C++_ini'].isnull() == 0) | (dflang2015['C++11'].isnull() == 0)).astype(dtype = 'int') dflang2015['C++'] = dflang2015['C++'].replace(0, np.nan) dflang2015.head(n = 10) #double-checking that the new boolean column is correct print(dflang2015['C++_ini'].count(), dflang2015['C++11'].count(), dflang2015['C++'].count()) dflang2015.loc[:,('C++_ini', 'C++11', 'C++')].reindex().head(n=50) #take out the now defunct initial C++ column and C++11 column, #since they will affect the next fxn dflang2015 = dflang2015.drop(['C++_ini', 'C++11'], axis = 1) dflang2015.head() #obtain list of columns that need to be aggregated into an Others column other_lang_2015 = list(set(dflang2015.columns).difference(set(lang_comm_list))) other_lang_2015 #Combine the lowest popularity languages into column "Other(s)" dflang2015['Other(s)'] = ((dflang2015['Dart'].isnull() == 0) | (dflang2015['Haskell'].isnull() == 0) | (dflang2015['CoffeeScript'].isnull() == 0) | (dflang2015['Perl'].isnull() == 0) | (dflang2015['Matlab'].isnull() == 0)) dflang2015['Other(s)'] = dflang2015['Other(s)'].replace(False, np.nan) #drop the columns that were just used to create the 'Others' column dflang2015 = dflang2015.drop(other_lang_2015, axis = 1) display(dflang2015['Other(s)'].sum(), dflang2015.head(), dflang2015.info()) #make a new df of counts from the boolean df slang2015_counts, dflang2015_counts = make_counts2015(dflang2015, 'Count 2015', 'Languages') display(slang2015_counts, dflang2015_counts)
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
--- 2016 Dataset---
def process_2016_pt1(df, col): df_droppedna, df_droppedna_len, df_droppedna_split, df_count, df_vals = process_col(df, col) df_new, df_new_transposed = s_of_lists_to_df(df_droppedna_split) return df_new, df_new_transposed, df_vals, df_droppedna_len dftech2016_new, dftech2016_new_tp, tech2016_vals, tech2016_len = process_2016_pt1(df2016, 'tech_do') display(dftech2016_new, tech2016_vals) lang2016_list = sorted(list(set(tech2016_vals).difference(set(not_lang_list)))) lang2016_list dflang2016_bool = make_df_bool(dftech2016_new, dftech2016_new_tp, lang2016_list) display(dflang2016_bool) dflang2016_bool = dflang2016_bool.rename(columns = {"Visual Basic": "Visual Basic / VBA"}) dflang2016_bool.head() other_lang2016_list = find_other_lang(dflang2016_bool) other_lang2016_list dflang2016_bool['Other(s)'] = (dflang2016_bool['Dart'] | dflang2016_bool['CoffeeScript'] | dflang2016_bool['Haskell'] | dflang2016_bool['Perl'] | dflang2016_bool['Matlab']) dflang2016_bool = dflang2016_bool.drop(other_lang2016_list, axis = 1) dflang2016_bool slang2016_counts, dflang2016_counts = make_counts(dflang2016_bool,'Counts 2016', 'Languages') dflang2016_counts
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
--- 2017 Dataset---
df_droppedna, df_droppedna_split, df_count, df_vals = process_col(df, col) def process_data_extended(df, col): ''' for years 2017-2019. processes a specified column from the raw imported dataframe ''' s = df[col] s = s.dropna() df_len = s.shape[0] df_count, df_vals = eval_complex_col(df, col) s_split = s.str.split(pat = '; ') df_new = pd.DataFrame(item for item in s_split) df_new_transposed = df_new.transpose() for item in df_vals: df_new[item] = df_new_transposed.iloc[:,:].isin([item]).sum() df_bool = df_new.loc[:,df_vals] return df_bool, df_vals, df_len dflang2017_bool, lang2017_vals, lang2017_len = process_data_extended(df2017, 'HaveWorkedLanguage') display(dflang2017_bool, lang2017_vals, lang2017_len) dflang2017_bool['Visual Basic / VBA'] = (dflang2017_bool['VB.NET'] | dflang2017_bool['VBA'] | dflang2017_bool['Visual Basic 6']) dflang2017_bool = dflang2017_bool.drop(['VB.NET', 'VBA', 'Visual Basic 6'], axis = 1) display(dflang2017_bool) other_lang2017_list = find_other_lang(dflang2017_bool) other_lang2017_list for elem in other_lang2017_list: print("dflang2017_bool['" + elem + "'] |") dflang2017_bool['Other(s)'] = (dflang2017_bool['Assembly'] | dflang2017_bool['Hack'] | dflang2017_bool['Erlang'] | dflang2017_bool['Smalltalk'] | dflang2017_bool['Perl'] | dflang2017_bool['Common Lisp'] | dflang2017_bool['Julia'] | dflang2017_bool['TypeScript'] | dflang2017_bool['Lua'] | dflang2017_bool['Dart'] | dflang2017_bool['Elixir'] | dflang2017_bool['CoffeeScript'] | dflang2017_bool['Matlab'] | dflang2017_bool['Groovy'] | dflang2017_bool['Haskell']) dflang2017_bool = clean_df_cat(dflang2017_bool, other_lang2017_list) slang2017_counts, dflang2017_counts = make_counts(dflang2017_bool,'Counts 2017', 'Languages') dflang2017_counts
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
--- 2018 Dataset---
dflang2018_bool, lang2018_vals, lang2018_len = process_data(df2018, 'LanguageWorkedWith') lang2018_vals dflang2018_bool['Visual Basic / VBA'] = (dflang2018_bool['VB.NET'] | dflang2018_bool['VBA'] | dflang2018_bool['Visual Basic 6']) dflang2018_bool = dflang2018_bool.drop(['VB.NET', 'VBA', 'Visual Basic 6'], axis = 1) dflang2018_bool['HTML/CSS'] = (dflang2018_bool['HTML'] | dflang2018_bool['CSS']) dflang2018_bool = dflang2018_bool.drop(['HTML', 'CSS'], axis = 1) other_lang2018_list = find_other_lang(dflang2018_bool) other_lang2018_list for elem in other_lang2018_list: print("dflang2018_bool['" + elem + "'] |") other_lang_str = '' for elem in other_lang18_list: other_lang_str = other_lang_str + "dflang2018_bool['" + elem +"']" if elem != 'TypeScript': other_lang_str = other_lang_str + " | " else: break other_lang_str dflang2018_bool['Other(s)'] = (dflang2018_bool['Assembly'] | dflang2018_bool['Delphi/Object Pascal'] | dflang2018_bool['Hack'] | dflang2018_bool['Erlang'] | dflang2018_bool['Cobol'] | dflang2018_bool['Perl'] | dflang2018_bool['TypeScript'] | dflang2018_bool['Julia'] | dflang2018_bool['Lua'] | dflang2018_bool['Ocaml'] | dflang2018_bool['CoffeeScript'] | dflang2018_bool['Matlab'] | dflang2018_bool['Kotlin'] | dflang2018_bool['Bash/Shell'] | dflang2018_bool['Groovy'] | dflang2018_bool['Haskell']) dflang2018_bool = clean_df_cat(dflang2018_bool, other_lang2018_list) slang2018_counts, dflang2018_counts = make_counts(dflang2018_bool,'Counts 2018', 'Languages') dflang2018_counts
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
--- 2019 Dataset---
dflang2019_bool, lang2019_vals, lang2019_len = process_data(df2019, 'LanguageWorkedWith') lang2019_vals dflang2019_bool = dflang2019_bool.rename(columns = {"VBA": "Visual Basic / VBA", "Other(s):": "Other"}) other_lang2019_list = find_other_lang(dflang2019_bool) other_lang2019_list = sorted(other_lang2019_list) other_lang2019_list for elem in other_lang2019_list: print("dflang2019_bool['" + elem + "'] |") dflang2019_bool['Other(s)'] = (dflang2019_bool['Assembly'] | dflang2019_bool['Dart'] | dflang2019_bool['Elixir'] | dflang2019_bool['Erlang'] | dflang2019_bool['Kotlin'] | dflang2019_bool['Other'] | dflang2019_bool['TypeScript'] | dflang2019_bool['WebAssembly']) dflang2019_bool = clean_df_cat(dflang2019_bool, other_lang2019_list) slang2019_counts, dflang2019_counts = make_counts(dflang2019_bool,'Counts 2019', 'Languages') dflang2019_counts
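With the per-year count frames built above, here is a hedged sketch of the cross-year line chart this notebook is working toward. It assumes the count columns are named exactly as created above ('Count 2015', 'Counts 2016', ..., 'Counts 2019') and merges on the shared 'Languages' column:

```python
from functools import reduce

# Merge the per-year count dataframes on the common 'Languages' column.
yearly_counts = [dflang2015_counts, dflang2016_counts, dflang2017_counts,
                 dflang2018_counts, dflang2019_counts]
lang_by_year = reduce(lambda left, right: left.merge(right, on='Languages', how='outer'),
                      yearly_counts).set_index('Languages')

# Rename the count columns to plain years and draw one line per language.
lang_by_year.columns = [2015, 2016, 2017, 2018, 2019]
lang_by_year.T.plot(figsize=(12, 6), marker='o')
plt.ylabel('Number of respondents')
plt.title('Language usage across Stack Overflow surveys, 2015-2019')
plt.show()
```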
_____no_output_____
CNRI-Python
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
RadarCOVID-Report Data Extraction
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid

import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns

%matplotlib inline

current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)

sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)

extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Constants
from Modules.ExposureNotification import exposure_notification_io

spain_region_country_code = "ES"
germany_region_country_code = "DE"

default_backend_identifier = spain_region_country_code

backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Parameters
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier
report_backend_identifier

environment_enable_multi_backend_download = \
    os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
    report_backend_identifiers = None
else:
    report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers

environment_invalid_shared_diagnoses_dates = \
    os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
    invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
    invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
COVID-19 Cases
report_backend_client = \ exposure_notification_io.get_backend_client_with_identifier( backend_identifier=report_backend_identifier) @retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10)) def download_cases_dataframe(): return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv") confirmed_df_ = download_cases_dataframe() confirmed_df_.iloc[0] confirmed_df = confirmed_df_.copy() confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]] confirmed_df.rename( columns={ "date": "sample_date", "iso_code": "country_code", }, inplace=True) def convert_iso_alpha_3_to_alpha_2(x): try: return pycountry.countries.get(alpha_3=x).alpha_2 except Exception as e: logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}") return None confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2) confirmed_df.dropna(inplace=True) confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True) confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_df.sort_values("sample_date", inplace=True) confirmed_df.tail() confirmed_days = pd.date_range( start=confirmed_df.iloc[0].sample_date, end=extraction_datetime) confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"]) confirmed_days_df["sample_date_string"] = \ confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_days_df.tail() def sort_source_regions_for_display(source_regions: list) -> list: if report_backend_identifier in source_regions: source_regions = [report_backend_identifier] + \ list(sorted(set(source_regions).difference([report_backend_identifier]))) else: source_regions = list(sorted(source_regions)) return source_regions report_source_regions = report_backend_client.source_regions_for_date( date=extraction_datetime.date()) report_source_regions = sort_source_regions_for_display( source_regions=report_source_regions) report_source_regions def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None): source_regions_at_date_df = confirmed_days_df.copy() source_regions_at_date_df["source_regions_at_date"] = \ source_regions_at_date_df.sample_date.apply( lambda x: source_regions_for_date_function(date=x)) source_regions_at_date_df.sort_values("sample_date", inplace=True) source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. 
\ source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x))) source_regions_at_date_df.tail() #%% source_regions_for_summary_df_ = \ source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy() source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True) source_regions_for_summary_df_.tail() #%% confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"] confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns) for source_regions_group, source_regions_group_series in \ source_regions_at_date_df.groupby("_source_regions_group"): source_regions_set = set(source_regions_group.split(",")) confirmed_source_regions_set_df = \ confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy() confirmed_source_regions_group_df = \ confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \ .reset_index().sort_values("sample_date") confirmed_source_regions_group_df = \ confirmed_source_regions_group_df.merge( confirmed_days_df[["sample_date_string"]].rename( columns={"sample_date_string": "sample_date"}), how="right") confirmed_source_regions_group_df["new_cases"] = \ confirmed_source_regions_group_df["new_cases"].clip(lower=0) confirmed_source_regions_group_df["covid_cases"] = \ confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round() confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[confirmed_output_columns] confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan) confirmed_source_regions_group_df.fillna(method="ffill", inplace=True) confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[ confirmed_source_regions_group_df.sample_date.isin( source_regions_group_series.sample_date_string)] confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df) result_df = confirmed_output_df.copy() result_df.tail() #%% result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True) result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left") result_df.sort_values("sample_date_string", inplace=True) result_df.fillna(method="ffill", inplace=True) result_df.tail() #%% result_df[["new_cases", "covid_cases"]].plot() if columns_suffix: result_df.rename( columns={ "new_cases": "new_cases_" + columns_suffix, "covid_cases": "covid_cases_" + columns_suffix}, inplace=True) return result_df, source_regions_for_summary_df_ confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe( report_backend_client.source_regions_for_date) confirmed_es_df, _ = get_cases_dataframe( lambda date: [spain_region_country_code], columns_suffix=spain_region_country_code.lower())
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Extract API TEKs
raw_zip_path_prefix = "Data/TEKs/Raw/" base_backend_identifiers = [report_backend_identifier] multi_backend_exposure_keys_df = \ exposure_notification_io.download_exposure_keys_from_backends( backend_identifiers=report_backend_identifiers, generation_days=backend_generation_days, fail_on_error_backend_identifiers=base_backend_identifiers, save_raw_zip_path_prefix=raw_zip_path_prefix) multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"] multi_backend_exposure_keys_df.rename( columns={ "generation_datetime": "sample_datetime", "generation_date_string": "sample_date_string", }, inplace=True) multi_backend_exposure_keys_df.head() early_teks_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.rolling_period < 144].copy() early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6 early_teks_df[early_teks_df.sample_date_string != extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) early_teks_df[early_teks_df.sample_date_string == extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[ "sample_date_string", "region", "key_data"]] multi_backend_exposure_keys_df.head() active_regions = \ multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() active_regions multi_backend_summary_df = multi_backend_exposure_keys_df.groupby( ["sample_date_string", "region"]).key_data.nunique().reset_index() \ .pivot(index="sample_date_string", columns="region") \ .sort_index(ascending=False) multi_backend_summary_df.rename( columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) multi_backend_summary_df.rename_axis("sample_date", inplace=True) multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int) multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days) multi_backend_summary_df.head() def compute_keys_cross_sharing(x): teks_x = x.key_data_x.item() common_teks = set(teks_x).intersection(x.key_data_y.item()) common_teks_fraction = len(common_teks) / len(teks_x) return pd.Series(dict( common_teks=common_teks, common_teks_fraction=common_teks_fraction, )) multi_backend_exposure_keys_by_region_df = \ multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index() multi_backend_exposure_keys_by_region_df["_merge"] = True multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_df.merge( multi_backend_exposure_keys_by_region_df, on="_merge") multi_backend_exposure_keys_by_region_combination_df.drop( columns=["_merge"], inplace=True) if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1: multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_combination_df[ multi_backend_exposure_keys_by_region_combination_df.region_x != multi_backend_exposure_keys_by_region_combination_df.region_y] multi_backend_exposure_keys_cross_sharing_df = \ multi_backend_exposure_keys_by_region_combination_df \ .groupby(["region_x", "region_y"]) \ .apply(compute_keys_cross_sharing) \ .reset_index() multi_backend_cross_sharing_summary_df = \ multi_backend_exposure_keys_cross_sharing_df.pivot_table( values=["common_teks_fraction"], columns="region_x", index="region_y", aggfunc=lambda x: x.item()) multi_backend_cross_sharing_summary_df multi_backend_without_active_region_exposure_keys_df = \ multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region 
!= report_backend_identifier] multi_backend_without_active_region = \ multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() multi_backend_without_active_region exposure_keys_summary_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.region == report_backend_identifier] exposure_keys_summary_df.drop(columns=["region"], inplace=True) exposure_keys_summary_df = \ exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df = \ exposure_keys_summary_df.reset_index().set_index("sample_date_string") exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) exposure_keys_summary_df.head()
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().drop(
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
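The SettingWithCopyWarning in the output above is raised because `exposure_keys_summary_df.drop(..., inplace=True)` mutates a boolean-filtered slice of another DataFrame. Below is a minimal self-contained sketch of the usual remedy, taking an explicit `.copy()` before mutating; the region values are made up for illustration.
```python
import pandas as pd

# Toy frame standing in for multi_backend_exposure_keys_df (values are made up).
df = pd.DataFrame({"region": ["ES", "ES", "DE"], "key_data": ["a", "b", "c"]})

# Boolean filtering returns a view-like slice; mutating it in place triggers
# SettingWithCopyWarning. Taking an explicit copy first avoids the warning.
subset = df[df.region == "ES"].copy()
subset.drop(columns=["region"], inplace=True)
print(subset)
```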
Dump API TEKs
tek_list_df = multi_backend_exposure_keys_df[ ["sample_date_string", "region", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( ["sample_date", "region"]).tek_list.unique().reset_index() tek_list_df["extraction_date"] = extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_path_prefix = "Data/TEKs/" tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json" tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json" tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json" for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]: os.makedirs(os.path.dirname(path), exist_ok=True) tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier] tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( tek_list_current_path, lines=True, orient="records") tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json( tek_list_daily_path, lines=True, orient="records") tek_list_base_df.to_json( tek_list_hourly_path, lines=True, orient="records") tek_list_base_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Load TEK Dumps
import glob def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame(columns=["region"]) file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json")))) if limit: file_paths = file_paths[:limit] for file_path in file_paths: logging.info(f"Loading TEKs from '{file_path}'...") iteration_extracted_teks_df = pd.read_json(file_path, lines=True) extracted_teks_df = extracted_teks_df.append( iteration_extracted_teks_df, sort=False) extracted_teks_df["region"] = \ extracted_teks_df.region.fillna(spain_region_country_code).copy() if region: extracted_teks_df = \ extracted_teks_df[extracted_teks_df.region == region] return extracted_teks_df daily_extracted_teks_df = load_extracted_teks( mode="Daily", region=report_backend_identifier, limit=tek_dumps_load_limit) daily_extracted_teks_df.head() exposure_keys_summary_df_ = daily_extracted_teks_df \ .sort_values("extraction_date", ascending=False) \ .groupby("sample_date").tek_list.first() \ .to_frame() exposure_keys_summary_df_.index.name = "sample_date_string" exposure_keys_summary_df_["tek_list"] = \ exposure_keys_summary_df_.tek_list.apply(len) exposure_keys_summary_df_ = exposure_keys_summary_df_ \ .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \ .sort_index(ascending=False) exposure_keys_summary_df = exposure_keys_summary_df_ exposure_keys_summary_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
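A side note, not part of the original notebook: `DataFrame.append`, used inside `load_extracted_teks` above, is deprecated in recent pandas releases and copies the accumulated frame on every iteration. A minimal sketch of the same accumulation pattern with `pd.concat`, using made-up per-file frames:
```python
import pandas as pd

# Stand-ins for the per-file frames read with pd.read_json(..., lines=True).
frames = [
    pd.DataFrame({"region": ["ES"], "tek_list": [["tek1", "tek2"]]}),
    pd.DataFrame({"region": ["ES"], "tek_list": [["tek3"]]}),
]

# Collect all frames first, then concatenate once instead of appending in a loop.
extracted_teks_df = pd.concat(frames, ignore_index=True, sort=False)
print(extracted_teks_df)
```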
Daily New TEKs
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() def compute_teks_by_generation_and_upload_date(date): day_new_teks_set_df = tek_list_df.copy().diff() try: day_new_teks_set = day_new_teks_set_df[ day_new_teks_set_df.index == date].tek_list.item() except ValueError: day_new_teks_set = None if pd.isna(day_new_teks_set): day_new_teks_set = set() day_new_teks_df = daily_extracted_teks_df[ daily_extracted_teks_df.extraction_date == date].copy() day_new_teks_df["shared_teks"] = \ day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set)) day_new_teks_df["shared_teks"] = \ day_new_teks_df.shared_teks.apply(len) day_new_teks_df["upload_date"] = date day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True) day_new_teks_df = day_new_teks_df[ ["upload_date", "generation_date", "shared_teks"]] day_new_teks_df["generation_to_upload_days"] = \ (pd.to_datetime(day_new_teks_df.upload_date) - pd.to_datetime(day_new_teks_df.generation_date)).dt.days day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0] return day_new_teks_df shared_teks_generation_to_upload_df = pd.DataFrame() for upload_date in daily_extracted_teks_df.extraction_date.unique(): shared_teks_generation_to_upload_df = \ shared_teks_generation_to_upload_df.append( compute_teks_by_generation_and_upload_date(date=upload_date)) shared_teks_generation_to_upload_df \ .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True) shared_teks_generation_to_upload_df.tail() today_new_teks_df = \ shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.upload_date == extraction_date].copy() today_new_teks_df.tail() if not today_new_teks_df.empty: today_new_teks_df.set_index("generation_to_upload_days") \ .sort_index().shared_teks.plot.bar() generation_to_upload_period_pivot_df = \ shared_teks_generation_to_upload_df[ ["upload_date", "generation_to_upload_days", "shared_teks"]] \ .pivot(index="upload_date", columns="generation_to_upload_days") \ .sort_index(ascending=False).fillna(0).astype(int) \ .droplevel(level=0, axis=1) generation_to_upload_period_pivot_df.head() new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "shared_teks_by_upload_date", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.tail() shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \ [["upload_date", "shared_teks"]].rename( columns={ "upload_date": "sample_date_string", "shared_teks": "shared_teks_uploaded_on_generation_date", }) shared_teks_uploaded_on_generation_date_df.head() estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \ .groupby(["upload_date"]).shared_teks.max().reset_index() \ .sort_values(["upload_date"], ascending=False) \ .rename(columns={ "upload_date": "sample_date_string", "shared_teks": "shared_diagnoses", }) invalid_shared_diagnoses_dates_mask = \ estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates) estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0 estimated_shared_diagnoses_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
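A note on the flattening idiom in the cell above: `set(sum(x, []))` concatenates a list of TEK lists but rebuilds the intermediate list on every addition, which is quadratic in the number of lists. A minimal sketch of an equivalent flattening with `itertools.chain`; the TEK values are made up.
```python
from itertools import chain

# A list of per-row TEK lists, like the grouped tek_list values above.
tek_lists = [["tek1", "tek2"], ["tek2", "tek3"], ["tek4"]]

# chain.from_iterable walks the nested lists once; set() removes duplicates,
# giving the same result as set(sum(tek_lists, [])) without the quadratic cost.
unique_teks = set(chain.from_iterable(tek_lists))
print(unique_teks)
```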
Hourly New TEKs
hourly_extracted_teks_df = load_extracted_teks( mode="Hourly", region=report_backend_identifier, limit=25) hourly_extracted_teks_df.head() hourly_new_tek_count_df = hourly_extracted_teks_df \ .groupby("extraction_date_with_hour").tek_list. \ apply(lambda x: set(sum(x, []))).reset_index().copy() hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \ .sort_index(ascending=True) hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff() hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply( lambda x: len(x) if not pd.isna(x) else 0) hourly_new_tek_count_df.rename(columns={ "new_tek_count": "shared_teks_by_upload_date"}, inplace=True) hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[ "extraction_date_with_hour", "shared_teks_by_upload_date"]] hourly_new_tek_count_df.head() hourly_summary_df = hourly_new_tek_count_df.copy() hourly_summary_df.set_index("extraction_date_with_hour", inplace=True) hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index() hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df = hourly_summary_df.tail(-1) hourly_summary_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Official Statistics
import requests import pandas.io.json official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics") official_stats_response.raise_for_status() official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json()) official_stats_df = official_stats_df_.copy() official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True) official_stats_df.head() official_stats_column_map = { "date": "sample_date", "applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated", "communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated", } accumulated_suffix = "_accumulated" accumulated_values_columns = \ list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values())) interpolated_values_columns = \ list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns)) official_stats_df = \ official_stats_df[official_stats_column_map.keys()] \ .rename(columns=official_stats_column_map) official_stats_df["extraction_date"] = extraction_date official_stats_df.head() official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json" previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True) previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True) official_stats_df = official_stats_df.append(previous_official_stats_df) official_stats_df.head() official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)] official_stats_df.sort_values("extraction_date", ascending=False, inplace=True) official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True) official_stats_df.head() official_stats_stored_df = official_stats_df.copy() official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d") official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True) official_stats_df.drop(columns=["extraction_date"], inplace=True) official_stats_df = confirmed_days_df.merge(official_stats_df, how="left") official_stats_df.sort_values("sample_date", ascending=False, inplace=True) official_stats_df.head() official_stats_df[accumulated_values_columns] = \ official_stats_df[accumulated_values_columns] \ .astype(float).interpolate(limit_area="inside") official_stats_df[interpolated_values_columns] = \ official_stats_df[accumulated_values_columns].diff(periods=-1) official_stats_df.drop(columns="sample_date", inplace=True) official_stats_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
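A small illustration, not in the original notebook, of the accumulated-to-daily conversion used above: with the rows sorted newest-first, `interpolate(limit_area="inside")` fills gaps between known accumulated totals and `diff(periods=-1)` subtracts each older value from the newer one. The dates and totals below are made up.
```python
import numpy as np
import pandas as pd

# Toy accumulated counter, newest date first, with one missing day in the middle.
accumulated = pd.Series(
    [100.0, np.nan, 70.0, 40.0],
    index=pd.to_datetime(["2021-06-29", "2021-06-28", "2021-06-27", "2021-06-26"]))

filled = accumulated.interpolate(limit_area="inside")  # fills the gap with 85.0
daily = filled.diff(periods=-1)                        # newer total minus older total
print(filled)
print(daily)  # 15.0, 15.0, 30.0, NaN
```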
Data Merge
result_summary_df = exposure_keys_summary_df.merge( new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( official_stats_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df = confirmed_es_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left") result_summary_df.set_index(["sample_date", "source_regions"], inplace=True) result_summary_df.drop(columns=["sample_date_string"], inplace=True) result_summary_df.sort_index(ascending=False, inplace=True) result_summary_df.head() with pd.option_context("mode.use_inf_as_na", True): result_summary_df = result_summary_df.fillna(0).astype(int) result_summary_df["teks_per_shared_diagnosis"] = \ (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0) result_summary_df["shared_diagnoses_per_covid_case"] = \ (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0) result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0) result_summary_df.head(daily_plot_days) def compute_aggregated_results_summary(days) -> pd.DataFrame: aggregated_result_summary_df = result_summary_df.copy() aggregated_result_summary_df["covid_cases_for_ratio"] = \ aggregated_result_summary_df.covid_cases.mask( aggregated_result_summary_df.shared_diagnoses == 0, 0) aggregated_result_summary_df["covid_cases_for_ratio_es"] = \ aggregated_result_summary_df.covid_cases_es.mask( aggregated_result_summary_df.shared_diagnoses_es == 0, 0) aggregated_result_summary_df = aggregated_result_summary_df \ .sort_index(ascending=True).fillna(0).rolling(days).agg({ "covid_cases": "sum", "covid_cases_es": "sum", "covid_cases_for_ratio": "sum", "covid_cases_for_ratio_es": "sum", "shared_teks_by_generation_date": "sum", "shared_teks_by_upload_date": "sum", "shared_diagnoses": "sum", "shared_diagnoses_es": "sum", }).sort_index(ascending=False) with pd.option_context("mode.use_inf_as_na", True): aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int) aggregated_result_summary_df["teks_per_shared_diagnosis"] = \ (aggregated_result_summary_df.shared_teks_by_upload_date / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \ (aggregated_result_summary_df.shared_diagnoses / aggregated_result_summary_df.covid_cases_for_ratio).fillna(0) aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \ (aggregated_result_summary_df.shared_diagnoses_es / aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0) return aggregated_result_summary_df aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7) aggregated_result_with_7_days_window_summary_df.head() 
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1] last_7_days_summary aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13) last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1] last_14_days_summary
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Report Results
display_column_name_mapping = { "sample_date": "Sample\u00A0Date\u00A0(UTC)", "source_regions": "Source Countries", "datetime_utc": "Timestamp (UTC)", "upload_date": "Upload Date (UTC)", "generation_to_upload_days": "Generation to Upload Period in Days", "region": "Backend", "region_x": "Backend\u00A0(A)", "region_y": "Backend\u00A0(B)", "common_teks": "Common TEKs Shared Between Backends", "common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)", "covid_cases": "COVID-19 Cases (Source Countries)", "shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)", "shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)", "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)", "shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)", "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)", "shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)", "covid_cases_es": "COVID-19 Cases (Spain)", "app_downloads_es": "App Downloads (Spain – Official)", "shared_diagnoses_es": "Shared Diagnoses (Spain – Official)", "shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)", } summary_columns = [ "covid_cases", "shared_teks_by_generation_date", "shared_teks_by_upload_date", "shared_teks_uploaded_on_generation_date", "shared_diagnoses", "teks_per_shared_diagnosis", "shared_diagnoses_per_covid_case", "covid_cases_es", "app_downloads_es", "shared_diagnoses_es", "shared_diagnoses_per_covid_case_es", ] summary_percentage_columns= [ "shared_diagnoses_per_covid_case_es", "shared_diagnoses_per_covid_case", ]
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Daily Summary Table
result_summary_df_ = result_summary_df.copy() result_summary_df = result_summary_df[summary_columns] result_summary_with_display_names_df = result_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) result_summary_with_display_names_df
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Daily Summary Plots
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \ .droplevel(level=["source_regions"]) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar( title=f"Daily Summary", rot=45, subplots=True, figsize=(15, 30), legend=False) ax_ = summary_ax_list[0] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.95) _ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist())) for percentage_column in summary_percentage_columns: percentage_column_index = summary_columns.index(percentage_column) summary_ax_list[percentage_column_index].yaxis \ .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() /opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead. layout[ax.rowNum, ax.colNum] = ax.get_visible() /opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead. if not layout[ax.rowNum + 1, ax.colNum]: /opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead. if not layout[ax.rowNum + 1, ax.colNum]:
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
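The MatplotlibDeprecationWarning messages above come from pandas' own plotting helpers on this pandas/Matplotlib combination rather than from the notebook's code; upgrading pandas removes them. If they clutter the report output in the meantime, they can be silenced explicitly (a deliberate suppression, not a fix), as sketched below.
```python
import warnings
import matplotlib

# Silence only Matplotlib's deprecation warnings (e.g. the rowNum/colNum notices
# emitted by pandas' subplot layout helpers); other warnings still surface.
warnings.filterwarnings("ignore", category=matplotlib.MatplotlibDeprecationWarning)
```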
Daily Generation to Upload Period Table
display_generation_to_upload_period_pivot_df = \ generation_to_upload_period_pivot_df \ .head(backend_generation_days) display_generation_to_upload_period_pivot_df \ .head(backend_generation_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) fig, generation_to_upload_period_pivot_table_ax = plt.subplots( figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df))) generation_to_upload_period_pivot_table_ax.set_title( "Shared TEKs Generation to Upload Period Table") sns.heatmap( data=display_generation_to_upload_period_pivot_df .rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Hourly Summary Plots
hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Publish Results
github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "", display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "", } general_columns = \ list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values())) general_formatter = lambda x: f"{x}" if x != 0 else "" display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns))) daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) def format_multi_backend_cross_sharing_fraction(x): if pd.isna(x): return "-" elif round(x * 100, 1) == 0: return "" else: return f"{x:.1%}" multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html( classes="table-center", formatters=display_formatters, float_format=format_multi_backend_cross_sharing_fraction) multi_backend_cross_sharing_summary_table_html = \ multi_backend_cross_sharing_summary_table_html \ .replace("<tr>","<tr style=\"text-align: center;\">") extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.item() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.item() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.item() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.item() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.item() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) display_source_regions = ", ".join(report_source_regions) if len(report_source_regions) == 1: display_brief_source_regions = report_source_regions[0] else: display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺" def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi df = df.copy() df_styler = df.style.format(display_formatters) media_path = 
get_temporary_image_path() dfi.export(df_styler, media_path) return media_path summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) multi_backend_summary_table_image_path = save_temporary_dataframe_image( df=multi_backend_summary_df) generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Save Results
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") multi_backend_cross_sharing_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png")
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Publish Results as JSON
def generate_summary_api_results(df: pd.DataFrame) -> list: api_df = df.reset_index().copy() api_df["sample_date_string"] = \ api_df["sample_date"].dt.strftime("%Y-%m-%d") api_df["source_regions"] = \ api_df["source_regions"].apply(lambda x: x.split(",")) return api_df.to_dict(orient="records") summary_api_results = \ generate_summary_api_results(df=result_summary_df) today_summary_api_results = \ generate_summary_api_results(df=extraction_date_result_summary_df)[0] summary_results = dict( backend_identifier=report_backend_identifier, source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=0, ), today=today_summary_api_results, last_7_days=last_7_days_summary, last_14_days=last_14_days_summary, daily_results=summary_api_results) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Publish on README
with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Publish on Twitter
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule" and \ (shared_teks_by_upload_date_last_hour or not are_today_results_partial): import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] if are_today_results_partial: today_addendum = " (Partial)" else: today_addendum = "" def format_shared_diagnoses_per_covid_case(value) -> str: if value == 0: return "–" return f"≤{value:.2%}" display_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case) display_last_14_days_shared_diagnoses_per_covid_case = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"]) display_last_14_days_shared_diagnoses_per_covid_case_es = \ format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"]) status = textwrap.dedent(f""" #RadarCOVID – {extraction_date_with_hour} Today{today_addendum}: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} - Usage Ratio: {display_shared_diagnoses_per_covid_case} Last 14 Days: - Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case} - Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es} Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
Exercise 1.10 I have a fair coin and a two-headed coin. I choose one of the two coins randomly with equal probability and flip it. Given that the flip was heads, what is the probability that I flipped the two-headed coin? **Solution:** Let $F$ denote picking the fair coin and $T$ picking the two-headed coin, respectively. Furthermore, let $H$ denote the event that the chosen coin shows heads. Then we have $$P(T | H) = \frac{P(H \cap T)}{P(H)} = \frac{P(H | T) \cdot P(T)}{P(T \cap H) + P(F \cap H)} = \frac{P(H | T) \cdot P(T)}{P(H|T) \cdot P(T) + P(H|F) \cdot P(F)} = \frac{1 \cdot \tfrac 1 2}{1 \cdot \tfrac 1 2 + \tfrac 1 2 \cdot \tfrac 1 2} = \frac 2 3.$$ We can check this result by simulation.
num_samples = 100000 chosen_coin = np.random.randint(low=0, high=2, size=num_samples) # 0 = fair, 1 = two-headed heads = np.random.randint(low=0, high=2, size=num_samples) + chosen_coin > 0 (chosen_coin * heads).sum() / heads.sum()
_____no_output_____
MIT
chapter01/exercise10.ipynb
soerenberg/probability-and-computing-exercises
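Not part of the original solution: a quick exact cross-check of the 2/3 answer with Python's fractions module, to compare against the Monte Carlo estimate above.
```python
from fractions import Fraction

# Exact Bayes computation: P(T | H) = P(H | T) P(T) / (P(H | T) P(T) + P(H | F) P(F)).
p_T = p_F = Fraction(1, 2)        # the coin is chosen uniformly at random
p_H_given_T = Fraction(1)         # the two-headed coin always shows heads
p_H_given_F = Fraction(1, 2)      # the fair coin shows heads half the time

posterior = (p_H_given_T * p_T) / (p_H_given_T * p_T + p_H_given_F * p_F)
print(posterior)  # 2/3
```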
Project: Tweets Data Analysis Table of Contents: Introduction, Data Wrangling (Data Gathering, Data Assessing, Data Cleaning), Exploratory Data Analysis, Conclusions Introduction > Wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. Additional gathering, then assessing and cleaning is required for "Wow!"-worthy analyses and visualizations.
# import the packages will be used through the project import numpy as np import pandas as pd # for twitter API import tweepy from tweepy import OAuthHandler import json from timeit import default_timer as timer import requests import tweepy import json import os import re # for Exploratory Data Analysis visually import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # activating the seaborn sns.set()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Data Wrangling 1- Gathering Data (A) Gathering the WeRateDogs Twitter archive data from the provided CSV file
# Load your data and print out a few lines. Perform operations to inspect data # types and look for instances of missing or possibly errant data. twitter_archive = pd.read_csv('twitter-archive-enhanced.csv') twitter_archive.head(5)
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
(B) Getting the image predictions file (image_predictions.tsv), which is hosted on Udacity's servers and is downloaded programmatically using the Requests library
# Scrape the image predictions file from the Udacity website url = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' response = requests.get(url) with open(os.path.join('image_predictions.tsv'), mode = 'wb') as file: file.write(response.content) # Load the image predictions file # using \t beacuse it's "Tab Separated Value" file images = pd.read_csv('image_predictions.tsv', sep = '\t') images.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
(C) Getting data from the Twitter API
# Query Twitter API for each tweet in the Twitter archive and save JSON in a text file # These are hidden to comply with Twitter's API terms and conditions consumer_key = 'JRJCYqpq8QnnQde8W60rPUwwb' consumer_secret = 'bysFJFrg0sjpWXmMV4EmePkOxLOPvmIgcbB3v0ZrwxrqhTD3bf' access_token = '307362468-CwCujZZ0OaFQ3Ut2xf4dNlEkxuTVADOQkmhj6A2U' access_secret = 'mYAXhcUOmPdUduQMyRbUXZrmcSPy36j7a9aqS6I4aHmWV' auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth, wait_on_rate_limit=True) # NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES: # df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to # change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv # NOTE TO REVIEWER: this student had mobile verification issues so the following # Twitter API code was sent to this student from a Udacity instructor # Tweet IDs for which to gather additional data via Twitter's API df_1 = twitter_archive tweet_ids = df_1.tweet_id.values len(tweet_ids) # Query Twitter's API for JSON data for each tweet ID in the Twitter archive count = 0 fails_dict = {} start = timer() # Save each tweet's returned JSON as a new line in a .txt file with open('tweet_json.txt', 'w') as outfile: # This loop will likely take 20-30 minutes to run because of Twitter's rate limit for tweet_id in tweet_ids: count += 1 print(str(count) + ": " + str(tweet_id)) try: tweet = api.get_status(tweet_id, tweet_mode='extended') print("Success") json.dump(tweet._json, outfile) outfile.write('\n') except tweepy.TweepError as e: print("Fail") fails_dict[tweet_id] = e pass end = timer() print(end - start) print(fails_dict) # printing the len of the fails dict and the time in that had taken to make this list in minutes len(fails_dict), 1834.9294512/60 tweets_list = [] with open('tweet_json.txt', 'r') as json_file: # Read the .txt file line by line into a list of dictionaries for line in json_file: twitter_data = json.loads(line) tweets_list.append({'tweet_id': twitter_data['id_str'], 'retweet_count': twitter_data['retweet_count'], 'favorite_count': twitter_data['favorite_count'], 'favorite_count': twitter_data['favorite_count'], 'followers_count': twitter_data['user']['followers_count']}) # Convert the list of dictionaries to a pandas DataFrame tweets_data = pd.DataFrame(tweets_list, columns=['tweet_id', 'retweet_count', 'favorite_count', 'followers_count']) tweets_data.head() tweets_data.to_csv('tweets_data.csv')
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
2- Data Assessing (A) Visual Assessing
# Display the twitter_archive table twitter_archive.head() twitter_archive twitter_archive[twitter_archive['expanded_urls'].isnull() == False] twitter_archive['text'][1] twitter_archive['rating_denominator'].value_counts() twitter_archive.nunique() twitter_archive.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null int64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(4), int64(3), object(10) memory usage: 313.0+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
The columns of the twitter_archive dataframe: > * *tweet_id* => the unique identifier for each tweet > * *in_reply_to_status_id* => the id of the replied-to tweet > * *in_reply_to_user_id* => the id of the replied-to user > * *timestamp* => the tweet post time > * *source* => the client/URL the tweet was posted from > * *text* => the text written with the picture > * *retweeted_status_id* => retweeted status id > * *retweeted_status_user_id* => retweeted status user id > * *retweeted_status_timestamp* => retweeted status timestamp > * *expanded_urls* => the URL of the tweet > * *rating_numerator* => rating numerator > * *rating_denominator* => rating denominator > * *name* => the name of the dog > * *doggo* => doggo dog stage > * *floofer* => floofer dog stage > * *pupper* => pupper dog stage > * *puppo* => puppo dog stage
images.head() images.tail() images.nunique()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
The columns of the images dataframe that I'll use in my analysis: > * tweet_id ==> the unique identifier for each tweet > * jpg_url ==> image link > * p1 ==> the top predicted breed for the image > * p1_conf ==> the confidence of the p1 prediction > * p1_dog ==> whether the p1 prediction is a dog breed (true or false)
# Display the tweets_data table tweets_data.head() # Display the tweets_data table tweets_data.tail() tweets_data.sample(5)
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
The columns of the tweets_data dataframe: > * tweet_id ==> the unique identifier for each tweet > * retweet_count ==> the number of retweets > * favorite_count ==> the number of favorites > * followers_count ==> the number of followers of the account ---- (B) Programmatic Assessing
twitter_archive.info() twitter_archive.isnull().sum() twitter_archive.name.value_counts() twitter_archive.isnull().sum().sum() twitter_archive.describe() twitter_archive.sample(5) twitter_archive.sample(5) twitter_archive.rating_denominator.value_counts() twitter_archive[twitter_archive['rating_denominator'] == 110] images.info() tweets_data.info() tweets_data.isnull().sum()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Quality issues: Twitter archive (`twitter_archive`) table * `tweet_id` data type is int, not string * `timestamp`, `retweeted_status_timestamp` are strings, not datetime * `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id` have a lot of missing values, and their data type is float, not string * columns `doggo`, `floofer`, `pupper`, `puppo`: values are the string 'None' instead of boolean (true/false), which would be easier to use * remove the columns not needed for the analysis * some names are just a single lowercase word rather than a real name * clean the source column into short, readable labels Image prediction (`images`) table * tweet_id is int, not string * columns `p1_dog`, `p2_dog`, `p3_dog`: sometimes all three are false or all three are true * columns p1, p2, p3 mix lowercase and uppercase names, so everything should be lowercased Twitter API (`tweets_data`) table has null values; the rows with nulls should be removed Tidiness Twitter archive (`twitter_archive`) table - there are two columns for the rating (numerator and denominator) that should become a single rating column - `doggo`, `floofer`, `pupper` & `puppo` columns should be merged into one column named `dog_stage` (an alternative melt-based sketch follows after this cell) Image prediction (`images`) table * some column names need to be more descriptive, e.g. `jpg_url` to `img_link` * the image predictions table should be joined to the twitter archive table Twitter API (`tweets_data`) table - the twitter api columns `retweet_count`, `favorite_count`, `followers_count` should be joined to the twitter archive table.
# making a copy to work one archive_clean = twitter_archive.copy() images_clean = images.copy() tweets_clean = tweets_data.copy()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
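As referenced in the tidiness list above, one way to fold the four stage columns into a single `dog_stage` column is pandas `melt`. This is an alternative sketch with made-up tweet ids, not the notebook's own approach, which concatenates the four string columns further below.
```python
import pandas as pd

# Hypothetical miniature frame with the four stage columns, mimicking twitter_archive.
demo = pd.DataFrame({
    "tweet_id": ["1", "2", "3"],
    "doggo":   ["doggo", "None", "None"],
    "floofer": ["None",  "None", "None"],
    "pupper":  ["None",  "pupper", "None"],
    "puppo":   ["None",  "None", "None"],
})

# Melt the four columns into one, drop the 'None' placeholders, and join back;
# tweets tagged with multiple stages would need extra handling here.
stages = (demo.melt(id_vars="tweet_id",
                    value_vars=["doggo", "floofer", "pupper", "puppo"],
                    value_name="dog_stage")
              .query("dog_stage != 'None'")
              .drop(columns="variable"))
result = demo[["tweet_id"]].merge(stages, on="tweet_id", how="left")
print(result)
```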
Define: changing the rating data type to float
archive_clean.rating_numerator = archive_clean.rating_numerator.astype(float,copy=False) # test archive_clean.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 17 columns): tweet_id 2356 non-null int64 in_reply_to_status_id 78 non-null float64 in_reply_to_user_id 78 non-null float64 timestamp 2356 non-null object source 2356 non-null object text 2356 non-null object retweeted_status_id 181 non-null float64 retweeted_status_user_id 181 non-null float64 retweeted_status_timestamp 181 non-null object expanded_urls 2297 non-null object rating_numerator 2356 non-null float64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(5), int64(2), object(10) memory usage: 313.0+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: fixing the data in the `rating_numerator` column; for example, in row `46` the value should be 13.5 but it is stored as 5 in the data
# for avoiding "This pattern has match groups" error from ipython notebook import warnings warnings.filterwarnings("ignore", 'This pattern has match groups') # diplaying the rows that has the problem archive_clean[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']] # storing the index of the rows which have the problem inds = archive_clean[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")].index inds def fix_rate(inds,col_name): # this function take the indexs and the column name we want to fix the rate in # extract the the true value from text columm # then fix the value in ratting_colimn # returns the fixed value for i in inds: txt = archive_clean.loc[i]['text'] m = re.search(r"(\d+\.\d*\/\d+)", txt) n = (m.group(1)).split('/')[0] n = float(n) archive_clean.loc[i, col_name] = n return archive_clean.iloc[inds][col_name] # fix the rating fix_rate(inds,'rating_numerator') # test the fix archive_clean[archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")][['text', 'rating_numerator']]
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: the retweet-related data should be removed; here the `retweet_count` column is dropped from the API table
tweets_clean.drop('retweet_count',axis=1,inplace=True) tweets_clean.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2339 entries, 0 to 2338 Data columns (total 3 columns): tweet_id 2339 non-null object favorite_count 2339 non-null int64 followers_count 2339 non-null int64 dtypes: int64(2), object(1) memory usage: 54.9+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: removing the unnecessary columns for my analysis
# drop the column form archive_clean table archive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','expanded_urls','text'],axis=1,inplace=True) # test archive_clean.info() # drop the column form archive_clean table images.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2075 entries, 0 to 2074 Data columns (total 12 columns): tweet_id 2075 non-null int64 jpg_url 2075 non-null object img_num 2075 non-null int64 p1 2075 non-null object p1_conf 2075 non-null float64 p1_dog 2075 non-null bool p2 2075 non-null object p2_conf 2075 non-null float64 p2_dog 2075 non-null bool p3 2075 non-null object p3_conf 2075 non-null float64 p3_dog 2075 non-null bool dtypes: bool(3), float64(3), int64(2), object(4) memory usage: 152.1+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: the source column holds long HTML anchor values, and it will be nicer and cleaner to replace each distinct value with a short word.
archive_clean.source.value_counts() url_1 = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>' url_2 = '<a href="http://vine.co" rel="nofollow">Vine - Make a Scene</a>' url_3 = '<a href="https://about.twitter.com/products/tweetdeck" rel="nofollow">TweetDeck</a>' url_4 = '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>' ind_1 = archive_clean[archive_clean['source'] == url_1]['source'].index ind_2 = archive_clean[archive_clean['source'] == url_2]['source'].index ind_3 = archive_clean[archive_clean['source'] == url_3]['source'].index ind_4 = archive_clean[archive_clean['source'] == url_4]['source'].index archive_clean.loc[ind_1, 'source'] = 'twitter_for_iphone' archive_clean.loc[ind_2, 'source'] = 'vine' archive_clean.loc[ind_3, 'source'] = 'tweet_deck' archive_clean.loc[ind_4, 'source'] = 'twitter_web' archive_clean.source.value_counts() # test archive_clean.source.value_counts()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: fix the data types of the IDs (convert them to strings) so the tables are easy to merge. Reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html
# Convert tweet_id to str for the tables archive_clean.tweet_id = archive_clean.tweet_id.astype(str,copy=False) images_clean.tweet_id = images_clean.tweet_id.astype(str,copy=False) archive_clean.tweet_id = archive_clean.tweet_id.astype(str,copy=False) archive_clean.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2356 entries, 0 to 2355 Data columns (total 10 columns): tweet_id 2356 non-null object timestamp 2356 non-null object source 2356 non-null object rating_numerator 2356 non-null float64 rating_denominator 2356 non-null int64 name 2356 non-null object doggo 2356 non-null object floofer 2356 non-null object pupper 2356 non-null object puppo 2356 non-null object dtypes: float64(1), int64(1), object(8) memory usage: 184.1+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: convert the archive_clean timestamp column to the datetime data type
# convert timestamp to datetime data type archive_clean.timestamp = pd.to_datetime(archive_clean.timestamp)
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: fixing the name column in archive_clean, as some names are just a single lowercase word rather than a real name, so I'll replace them with an empty string.
archive_clean.name #replace names lowercase letters with '' archive_clean.name = archive_clean.name.str.replace('(^[a-z]*)', '') #replace '' letters with 'None' archive_clean.name = archive_clean.name.replace('', 'None') # test archive_clean.name.value_counts() #test for the letters archive_clean.query('name == "(^[a-z]*)"').count()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: the tweets_clean data has nulls that we have to remove, so we'll drop all rows with nulls from our dataset
tweets_clean.isnull().sum() tweets_data.isnull().sum() tweets_clean.dropna(axis=0, inplace=True) tweets_clean.isnull().sum()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: in tweets_clean (from the API) we need to change the data type of favorite_count and followers_count to be int
tweets_clean.info() tweets_clean.favorite_count = tweets_clean.favorite_count.astype(int,copy=False) tweets_clean.followers_count = tweets_clean.followers_count.astype(int,copy=False) #test tweets_clean.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 2339 entries, 0 to 2338 Data columns (total 3 columns): tweet_id 2339 non-null object favorite_count 2339 non-null int32 followers_count 2339 non-null int32 dtypes: int32(2), object(1) memory usage: 54.8+ KB
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: columns p1, p2, p3 contain names that mix lowercase and uppercase, so we have to make everything lowercase
images_clean['p1'] = images_clean['p1'].str.lower() images_clean['p2'] = images_clean['p2'].str.lower() images_clean['p3'] = images_clean['p3'].str.lower() #test images_clean.head() tweets_clean.head() archive_clean.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: rename the jpg_url column to img_link
images_clean.head() images_clean.rename(columns={'jpg_url':'img_link'},inplace=True) #test images_clean.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
(2) Tidy Define: 1- combine rating_numerator and rating_denominator into one rating column in archive_clean, then remove the two original columns
# making and adding the column to the archive_clean dataset archive_clean['rating'] = archive_clean['rating_numerator']/archive_clean['rating_denominator'] #test archive_clean.head() # drop the rating_numerator and rating_denominator columns archive_clean.drop(['rating_numerator','rating_denominator'],axis=1, inplace=True) #test archive_clean.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Define: 2- merge the doggo, floofer, pupper and puppo columns into one dog_stage column in archive_clean, then remove the four original columns
#1- replace to all the null value in the column def remove_None(df, col_name,value): # take the df name and col_name and return the col with no None word ind = df[df[col_name] == value][col_name].index df.loc[ind, col_name] = '' return df.head() # replace to all the None value in the column remove_None(archive_clean, 'doggo',"None") remove_None(archive_clean, 'floofer',"None") remove_None(archive_clean, 'pupper',"None") remove_None(archive_clean, 'puppo',"None") # we will melt them together in dog_stage archive_clean['dog_stage'] = archive_clean['doggo'] + archive_clean['floofer'] + archive_clean['pupper'] + archive_clean['puppo'] # test archive_clean['dog_stage'].value_counts(), archive_clean['doggo'].value_counts() # then we will take any unexpect value to be multiple archive_clean['dog_stage'].replace('', "multiple", inplace=True) archive_clean['dog_stage'].replace('doggopupper', "multiple", inplace=True) archive_clean['dog_stage'].replace('doggopuppo', "multiple", inplace=True) archive_clean['dog_stage'].replace('doggofloofer', "multiple", inplace=True) # test archive_clean['dog_stage'].value_counts() # test archive_clean.head() # droping the columns out archive_clean.drop(['doggo','floofer','pupper','puppo'],axis=1, inplace=True) # test archive_clean.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Now `archive_clean` is ready to join the other tables. Define: 3- in the images_clean dataset, pick one breed from the three predictions (p1, p2, p3) according to the highest confidence
images_clean.head() #define dog_breed function to separate out the 3 columns of breed into one with highest confidence ratio def get_p(r): max_num = max(r.p1_conf ,r.p2_conf, r.p3_conf) if r.p1_conf == max_num: return r.p1 elif r.p2_conf == max_num: return r.p2 elif r.p3_conf == max_num: return r.p3 def get_p_conf(r): return max(r.p1_conf ,r.p2_conf, r.p3_conf) images_clean['breed'] = images_clean.apply(get_p, axis=1) images_clean['p_conf'] = images_clean.apply(get_p_conf, axis=1) images_clean.head() # droping the columns images_clean.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],axis=1,inplace=True) # test images_clean.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Now the data is clean, so we are ready to merge the three tables into one master table
#merge the two tables twitter_archive_master = pd.merge(left=archive_clean, right=images_clean, how='inner', on='tweet_id') twitter_archive_master = pd.merge(left=twitter_archive_master, right=tweets_clean, how='inner', on='tweet_id') twitter_archive_master.info() twitter_archive_master.head()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
saving the clean data
# saving the data fram to csv file twitter_archive_master.to_csv('twitter_archive_master.csv', index=False) # saving the data fram to sqlite file (data base) df = twitter_archive_master
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
Exploratory Data Analysis Research Question 1: what is the most popular dog stage?
counts = df['dog_stage'].value_counts()[1:] uni = counts.index counts # gentarting a list of the loc or the index for each to be replaced by the tick locs = np.arange(len(uni)) plt.bar(locs, counts) plt.xlabel('Dog stage', fontsize=14) plt.ylabel('The number of tweets', fontsize=14) # Set text labels: plt.xticks(locs, uni, fontsize=12, rotation=90) plt.title('the number of tweets for each dog stage', fontsize=16) plt.show() #ploting it in pie chart plt.pie(counts, labels=uni) plt.title('the number of tweets for each dog stage', fontsize=16) plt.legend();
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
By ignoring the unknowns, from our data we can see that: > The greatest number of tweets is about pupper dogs, with 1055 tweets. >> The doggo dogs have 335 tweets. >> 115 tweets are about puppo dogs. >> 35 tweets are about floofer dogs, which is the lowest count. Research Question 2: what is the most popular dog stage according to average rating?
rating = df['rating'].groupby(df['dog_stage']).mean()[:-1].sort_values(ascending=False) rating #polting the values in barchat dog_stage = rating.index plt.bar(dog_stage, rating) plt.xlabel('Dog breed', fontsize=14) plt.ylabel('averge rating', fontsize=14) # Set text labels: plt.title('the number of average rate for each dog breed', fontsize=16) plt.show()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
> From the bar chart we can see that the floofer tweets have a 1.2 average rating and the puppo tweets have an average rating of 1.197, > while the doggo tweets have a 1.197 average rating, > and the pupper tweets have a 1.068 average rating. Research Question 3: what are the top 10 breeds with the most tweets?
top_breads = df['breed'].value_counts()[:10] topbreds_uni = top_breads.index top_breads # gentarting a list of the loc or the index for each breed to be replaced by the tick locs = np.arange(len(topbreds_uni)) plt.bar(locs, top_breads) plt.xlabel('Dog breed', fontsize=14) plt.ylabel('The number of tweets', fontsize=14) # Set text labels: plt.xticks(locs, topbreds_uni, fontsize=12, rotation=90) plt.title('the number of tweets for each dog breed', fontsize=16) plt.show()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
The top 10 breeds with the most tweets are golden_retriever with 750 tweets, labrador_retriever with 495, pembroke with 440, chihuahua with 405, pug with 285, chow with 220, samoyed with 210, toy poodle with 195, pomeranian with 190, and malamute with 150. Research Question 4: what is the average rating from each source?
df['rating'].groupby(df['source']).mean().sort_values(ascending=False) retweets = df['rating'].groupby(df['source']).mean().sort_values(ascending=False) source = retweets.index plt.bar(source, retweets) plt.xlabel('scource', fontsize=14) plt.ylabel('rating average', fontsize=14) plt.title('the number of rating average form each scoure', fontsize=16) plt.show()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
> The source with the highest average rating is Twitter for iPhone with 18.77 (about 19), >> then Twitter Web with a 1.008 average rating, >> and TweetDeck with a 1.006 average rating. Research Question 5: what are the top 4 images with the most favorite counts?
images = df['favorite_count'].groupby(df['img_link']).sum().sort_values(ascending=False).iloc[:4] image_lbl = [] for i in range(len(images)): x = df[df['img_link'] == images.index[i]]['breed'].iloc[0] image_lbl.append(x) dog_stage= [] for i in range(len(images)): x = df[df['img_link'] == images.index[i]]['dog_stage'].iloc[0] dog_stage.append(x) image_lbl, dog_stage plt.bar(image_lbl, images) plt.xlabel('scource', fontsize=14) plt.ylabel('favorite_counts', fontsize=14) plt.title('the number of favorite counts form each scoure', fontsize=16) plt.xticks(image_lbl, image_lbl, fontsize=12, rotation=90) plt.show()
_____no_output_____
MIT
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data