Dataset columns: markdown, code, output, license, path, repo_name
Function for the boundary conditions.
def BC(U):
    """Return the dependent variable with the updated values at the boundaries."""
    U[0] = 40.0
    U[-1] = 0.0
    return U
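For context, here is a minimal sketch of how `BC` might be applied inside an FTCS time-stepping loop for the 1-D diffusion equation. The grid size, diffusion number `d`, and step count are illustrative assumptions, not the tutorial's actual solver code.

```python
import numpy as np

# Illustrative grid and diffusion number (assumptions, not the tutorial's values)
nx, d, nt = 41, 0.4, 500           # grid points, diffusion number alpha*dt/dx**2, time steps
U = np.zeros(nx)
U = BC(U)                          # enforce Dirichlet boundaries: U[0] = 40.0, U[-1] = 0.0

for _ in range(nt):
    # FTCS update of the interior points
    U[1:-1] = U[1:-1] + d * (U[2:] - 2.0 * U[1:-1] + U[:-2])
    U = BC(U)                      # re-apply the boundary conditions after each step
```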
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
## Lesson goals

* Intro to Markdown, a plain text-based syntax for formatting docs
* Markdown is integrated into the Jupyter notebook

## What is Markdown?

* Developed in 2004 by John Gruber
  - a way of formatting text
  - a Perl utility for converting Markdown into HTML

**Plain text files** have many advantages over other formats:

1. they are readable on virtually all devices
2. they have withstood the test of time (unlike legacy word-processing formats)

By using Markdown you'll be able to produce files that are legible in plain text and ready to be styled in other platforms.

Examples:

* blogging engines, static site generators, and sites like GitHub support Markdown & will render Markdown into HTML
* tools like Pandoc convert files into and out of Markdown

Markdown files are saved with the extension `.md` and can be opened in text editors like TextEdit, Notepad, Sublime Text, or Vim.

## Headings

Four levels of heading are available in Markdown, indicated by the number of `#` characters preceding the heading text. Paste the following examples into a code box.

```
# First level heading
## Second level heading
### Third level heading
#### Fourth level heading
```

# First level heading
## Second level heading
### Third level heading
#### Fourth level heading

First and second level headings may also be entered as follows:

```
First level heading
=======
Second level heading
----------
```

First level heading
=======
Second level heading
----------

## Paragraphs & Line Breaks

Try typing the following sentence into the textbox:

```
Welcome to the Jupyter Jumpstart.
Today we'll be learning about Markdown syntax.
This sentence is separated by a single line break from the preceding one.
```

Welcome to the Jupyter Jumpstart.
Today we'll be learning about Markdown syntax.
This sentence is separated by a single line break from the preceding one.

NOTE:

* Paragraphs must be separated by an empty line
* leave an empty line between `syntax` and `This`
* in some implementations of Markdown, single line breaks must also be indicated with two empty spaces at the end of each line

## Adding Emphasis

* Text can be italicized by wrapping the word in `*` or `_` symbols
* Bold text is written by wrapping the word in `**` or `__`

Try adding emphasis to a sentence using these methods:

```
I am **very** excited about the _Jupyter Jumpstart_ workshop.
```

I am **very** excited about the _Jupyter Jumpstart_ workshop.

## Making Lists

Markdown includes support for ordered and unordered lists. Try typing the following list into the textbox:

```
Shopping List
----------
* Fruits
    * Apples
    * Oranges
    * Grapes
* Dairy
    * Milk
    * Cheese
```

Indenting the `*` will allow you to create nested items.

Shopping List
----------
* Fruits
    * Apples
    * Oranges
    * Grapes
* Dairy
    * Milk
    * Cheese

**Ordered lists** are written by numbering each line. Once again, the goal of Markdown is to produce documents that are both legible as plain text and able to be transformed into other formats.

```
To-do list
----------
1. Finish Markdown tutorial
2. Go to grocery store
3. Prepare lunch
```

To-do list
----------
1. Finish Markdown tutorial
2. Go to grocery store
3. Prepare lunch

## Code Snippets

* Represent code by wrapping snippets in back-tick characters like `` ` ``
* Whole blocks of code are written by typing three backtick characters before and after each block

Try typing the following text into the textbox:

```html
Website Title
```

```html
Website Title
```

**Specific languages**: in Jupyter you can specify a language for code syntax highlighting. Example:

```python
for item in collection:
    print(item)
```

Note how the Python keywords are highlighted.

```python
for item in collection:
    print(item)
```

## Blockquotes

Adding a `>` before any paragraph will render it as a blockquote element. Try typing the following text into the textbox:

```
> Hello, I am a paragraph of text enclosed in a blockquote. Notice how I am offset from the left margin.
```

> Hello, I am a paragraph of text enclosed in a blockquote. Notice how I am offset from the left margin.

## Links

* Inline links are written by enclosing the link text in square brackets first, then including the URL and optional alt-text in round brackets

`For more tutorials, please visit the [Programming Historian](http://programminghistorian.org/ "Programming Historian main page").`

[Programming Historian](http://programminghistorian.org/ "Programming Historian main page")

## Images

Images can be referenced using `!`, followed by some alt-text in square brackets, followed by the image URL and an optional title. These will not be displayed in your plain text document, but would be embedded into a rendered HTML page.

`![Wikipedia logo](http://upload.wikimedia.org/wikipedia/en/8/80/Wikipedia-logo-v2.svg "Wikipedia logo")`

![Wikipedia logo](http://upload.wikimedia.org/wikipedia/en/8/80/Wikipedia-logo-v2.svg "Wikipedia logo")

## Horizontal Rules

Horizontal rules are produced when three or more `-`, `*` or `_` are included on a line by themselves, regardless of the number of spaces between them. All of the following combinations will render horizontal rules:

```
___
* * *
- - - - - -
```

___
* * *
- - - - - -

## Tables

* Use pipes `|` to separate columns and hyphens `-` between your headings and the rest of the table content
* Pipes are only strictly necessary between columns; you may use them on either side of your table for a more polished look
* Cells can contain any length of content, and it is not necessary for pipes to be vertically aligned with each other

Make the below into a table in the notebook:

```
| Heading 1 | Heading 2 | Heading 3 |
| --------- | --------- | --------- |
| Row 1, column 1 | Row 1, column 2 | Row 1, column 3 |
| Row 2, column 1 | Row 2, column 2 | Row 2, column 3 |
| Row 3, column 1 | Row 3, column 2 | Row 3, column 3 |
```

| Heading 1 | Heading 2 | Heading 3 |
| --------- | --------- | --------- |
| Row 1, column 1 | Row 1, column 2 | Row 1, column 3 |
| Row 2, column 1 | Row 2, column 2 | Row 2, column 3 |
| Row 3, column 1 | Row 3, column 2 | Row 3, column 3 |

To specify the alignment of each column, colons `:` can be added to the header row as follows. Create the table in the notebook.

```
| Left-aligned | Centered | Right-aligned |
| :-------- | :-------: | --------: |
| Apples | Red | 5000 |
| Bananas | Yellow | 75 |
```

| Left-aligned | Centered | Right-aligned |
| :-------- | :-------: | --------: |
| Apples | Red | 5000 |
| Bananas | Yellow | 75 |
from IPython import display

display.YouTubeVideo('Rc4JQWowG5I')

whos

display.YouTubeVideo??

help(display.YouTubeVideo)
Help on class YouTubeVideo in module IPython.lib.display: class YouTubeVideo(IFrame) | Class for embedding a YouTube Video in an IPython session, based on its video id. | | e.g. to embed the video from https://www.youtube.com/watch?v=foo , you would | do:: | | vid = YouTubeVideo("foo") | display(vid) | | To start from 30 seconds:: | | vid = YouTubeVideo("abc", start=30) | display(vid) | | To calculate seconds from time as hours, minutes, seconds use | :class:`datetime.timedelta`:: | | start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds()) | | Other parameters can be provided as documented at | https://developers.google.com/youtube/player_parameters#parameter-subheader | | Method resolution order: | YouTubeVideo | IFrame | builtins.object | | Methods defined here: | | __init__(self, id, width=400, height=300, **kwargs) | Initialize self. See help(type(self)) for accurate signature. | | ---------------------------------------------------------------------- | Data descriptors inherited from IFrame: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) | | ---------------------------------------------------------------------- | Data and other attributes inherited from IFrame: | | iframe = '\n <iframe\n width="{width}"\n ... ...
CC0-1.0
Markdown 101-class.ipynb
uc-data-services/elag2016-jupyter-jumpstart
Trade Strategy

__Summary:__ In this notebook we test the results of the given model.
# Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os

np.random.seed(0)

import warnings
warnings.filterwarnings('ignore')

# User defined names
index = "BTC-USD"
filename_whole = "whole_dataset" + index + "_xgboost_model.csv"
filename_trending = "Trending_dataset" + index + "_xgboost_model.csv"
filename_meanreverting = "MeanReverting_dataset" + index + "_xgboost_model.csv"
date_col = "Date"
Rf = 0.01  # Risk free rate of return

# Get current working directory
mycwd = os.getcwd()
print(mycwd)

# Change to data directory
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Data")

# Read the datasets
df_whole = pd.read_csv(filename_whole, index_col=date_col)
df_trending = pd.read_csv(filename_trending, index_col=date_col)
df_meanreverting = pd.read_csv(filename_meanreverting, index_col=date_col)

# Convert index to datetime
df_whole.index = pd.to_datetime(df_whole.index)
df_trending.index = pd.to_datetime(df_trending.index)
df_meanreverting.index = pd.to_datetime(df_meanreverting.index)

# Head for whole dataset
df_whole.head()
df_whole.shape

# Head for Trending dataset
df_trending.head()
df_trending.shape

# Head for Mean Reverting dataset
df_meanreverting.head()
df_meanreverting.shape

# Merge results from both models to one
df_model = df_trending.append(df_meanreverting)
df_model.sort_index(inplace=True)
df_model.head()
df_model.shape
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Functions
def initialize(df): days, Action1, Action2, current_status, Money, Shares = ([] for i in range(6)) Open_price = list(df['Open']) Close_price = list(df['Adj Close']) Predicted = list(df['Predicted']) Action1.append(Predicted[0]) Action2.append(0) current_status.append(Predicted[0]) if(Predicted[0] != 0): days.append(1) if(Predicted[0] == 1): Money.append(0) else: Money.append(200) Shares.append(Predicted[0] * (100/Open_price[0])) else: days.append(0) Money.append(100) Shares.append(0) return days, Action1, Action2, current_status, Predicted, Money, Shares, Open_price, Close_price def Action_SA_SA(days, Action1, Action2, current_status, i): if(current_status[i-1] != 0): days.append(1) else: days.append(0) current_status.append(current_status[i-1]) Action1.append(0) Action2.append(0) return days, Action1, Action2, current_status def Action_ZE_NZE(days, Action1, Action2, current_status, i): if(days[i-1] < 5): days.append(days[i-1] + 1) Action1.append(0) Action2.append(0) current_status.append(current_status[i-1]) else: days.append(0) Action1.append(current_status[i-1] * (-1)) Action2.append(0) current_status.append(0) return days, Action1, Action2, current_status def Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i): current_status.append(Predicted[i]) Action1.append(Predicted[i]) Action2.append(0) days.append(days[i-1] + 1) return days, Action1, Action2, current_status def Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i): current_status.append(Predicted[i]) Action1.append(Predicted[i]) Action2.append(Predicted[i]) days.append(1) return days, Action1, Action2, current_status def get_df(df, Action1, Action2, days, current_status, Money, Shares): df['Action1'] = Action1 df['Action2'] = Action2 df['days'] = days df['current_status'] = current_status df['Money'] = Money df['Shares'] = Shares return df def Get_TradeSignal(Predicted, days, Action1, Action2, current_status): # Loop over 1 to N for i in range(1, len(Predicted)): # When model predicts no action.. 
if(Predicted[i] == 0): if(current_status[i-1] != 0): days, Action1, Action2, current_status = Action_ZE_NZE(days, Action1, Action2, current_status, i) else: days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) # When Model predicts sell elif(Predicted[i] == -1): if(current_status[i-1] == -1): days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) elif(current_status[i-1] == 0): days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i) else: days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i) # When model predicts Buy elif(Predicted[i] == 1): if(current_status[i-1] == 1): days, Action1, Action2, current_status = Action_SA_SA(days, Action1, Action2, current_status, i) elif(current_status[i-1] == 0): days, Action1, Action2, current_status = Action_NZE_ZE(days, Action1, Action2, current_status, Predicted, i) else: days, Action1, Action2, current_status = Action_NZE_NZE(days, Action1, Action2, current_status, Predicted, i) return days, Action1, Action2, current_status def Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price): for i in range(1, len(Open_price)): if(Action1[i] == 0): Money.append(Money[i-1]) Shares.append(Shares[i-1]) else: if(Action2[i] == 0): # Enter new position if(Shares[i-1] == 0): Shares.append(Action1[i] * (Money[i-1]/Open_price[i])) Money.append(Money[i-1] - Action1[i] * Money[i-1]) # Exit the current position else: Shares.append(0) Money.append(Money[i-1] - Action1[i] * np.abs(Shares[i-1]) * Open_price[i]) else: Money.append(Money[i-1] -1 *Action1[i] *np.abs(Shares[i-1]) * Open_price[i]) Shares.append(Action2[i] * (Money[i]/Open_price[i])) Money[i] = Money[i] - 1 * Action2[i] * np.abs(Shares[i]) * Open_price[i] return Money, Shares def Get_TradeData(df): # Initialize the variables days,Action1,Action2,current_status,Predicted,Money,Shares,Open_price,Close_price = initialize(df) # Get Buy/Sell trade signal days, Action1, Action2, current_status = Get_TradeSignal(Predicted, days, Action1, Action2, current_status) Money, Shares = Get_FinancialSignal(Open_price, Action1, Action2, Money, Shares, Close_price) df = get_df(df, Action1, Action2, days, current_status, Money, Shares) df['CurrentVal'] = df['Money'] + df['current_status'] * np.abs(df['Shares']) * df['Adj Close'] return df def Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year): """ Prints the metrics """ print("++++++++++++++++++++++++++++++++++++++++++++++++++++") print(" Year: {0}".format(year)) print(" Number of Trades Executed: {0}".format(number_of_trades)) print("Number of days with Active Position: {}".format(active_days)) print(" Annual Return: {:.6f} %".format(annual_returns*100)) print(" Sharpe Ratio: {:.2f}".format(sharpe_ratio)) print(" Maximum Drawdown (Daily basis): {:.2f} %".format(drawdown*100)) print("----------------------------------------------------") return def Get_results_PL_metrics(df, Rf, year): df['tmp'] = np.where(df['current_status'] == 0, 0, 1) active_days = df['tmp'].sum() number_of_trades = np.abs(df['Action1']).sum()+np.abs(df['Action2']).sum() df['tmp_max'] = df['CurrentVal'].rolling(window=20).max() df['tmp_min'] = df['CurrentVal'].rolling(window=20).min() df['tmp'] = np.where(df['tmp_max'] > 0, (df['tmp_max'] - df['tmp_min'])/df['tmp_max'], 0) drawdown = df['tmp'].max() annual_returns = (df['CurrentVal'].iloc[-1]/100 
- 1) std_dev = df['CurrentVal'].pct_change(1).std() sharpe_ratio = (annual_returns - Rf)/std_dev Print_Fromated_PL(active_days, number_of_trades, drawdown, annual_returns, std_dev, sharpe_ratio, year) return
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
# Change to Images directory
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Images")
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Whole Dataset
df_whole_train = df_whole[df_whole["Sample"] == "Train"] df_whole_test = df_whole[df_whole["Sample"] == "Test"] df_whole_test_2019 = df_whole_test[df_whole_test.index.year == 2019] df_whole_test_2020 = df_whole_test[df_whole_test.index.year == 2020] output_train_whole = Get_TradeData(df_whole_train) output_test_whole = Get_TradeData(df_whole_test) output_test_whole_2019 = Get_TradeData(df_whole_test_2019) output_test_whole_2020 = Get_TradeData(df_whole_test_2020) output_train_whole["BuyandHold"] = (100 * output_train_whole["Adj Close"])/(output_train_whole.iloc[0]["Adj Close"]) output_test_whole["BuyandHold"] = (100*output_test_whole["Adj Close"])/(output_test_whole.iloc[0]["Adj Close"]) output_test_whole_2019["BuyandHold"] = (100 * output_test_whole_2019["Adj Close"])/(output_test_whole_2019.iloc[0] ["Adj Close"]) output_test_whole_2020["BuyandHold"] = (100 * output_test_whole_2020["Adj Close"])/(output_test_whole_2020.iloc[0] ["Adj Close"]) Get_results_PL_metrics(output_test_whole_2019, Rf, 2019) Get_results_PL_metrics(output_test_whole_2020, Rf, 2020) # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_train_whole["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_train_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Train Sample "+ str(index) + " Xgboost Whole Dataset", fontsize=16) plt.savefig("Train Sample Whole Dataset Xgboost Model" + str(index) +'.png') plt.show() plt.close() # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_test_whole["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_test_whole["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Test Sample "+ str(index) + " Xgboost Whole Dataset", fontsize=16) plt.savefig("Test Sample Whole Dataset XgBoost Model" + str(index) +'.png') plt.show() plt.close()
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
__Comments:__ Based on its performance on the Train Sample, the model has clearly learnt the pattern rather than over-fitting, but its performance on the Test Sample is very poor.

Segment Model
df_model_train = df_model[df_model["Sample"] == "Train"] df_model_test = df_model[df_model["Sample"] == "Test"] df_model_test_2019 = df_model_test[df_model_test.index.year == 2019] df_model_test_2020 = df_model_test[df_model_test.index.year == 2020] output_train_model = Get_TradeData(df_model_train) output_test_model = Get_TradeData(df_model_test) output_test_model_2019 = Get_TradeData(df_model_test_2019) output_test_model_2020 = Get_TradeData(df_model_test_2020) output_train_model["BuyandHold"] = (100 * output_train_model["Adj Close"])/(output_train_model.iloc[0]["Adj Close"]) output_test_model["BuyandHold"] = (100 * output_test_model["Adj Close"])/(output_test_model.iloc[0]["Adj Close"]) output_test_model_2019["BuyandHold"] = (100 * output_test_model_2019["Adj Close"])/(output_test_model_2019.iloc[0] ["Adj Close"]) output_test_model_2020["BuyandHold"] = (100 * output_test_model_2020["Adj Close"])/(output_test_model_2020.iloc[0] ["Adj Close"]) Get_results_PL_metrics(output_test_model_2019, Rf, 2019) Get_results_PL_metrics(output_test_model_2020, Rf, 2020) # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_train_model["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_train_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Train Sample Hurst Segment XgBoost Models "+ str(index), fontsize=16) plt.savefig("Train Sample Hurst Segment XgBoost Models" + str(index) +'.png') plt.show() plt.close() # Scatter plot to save fig plt.figure(figsize=(10,5)) plt.plot(output_test_model["CurrentVal"], 'b-', label="Value (Model)") plt.plot(output_test_model["BuyandHold"], 'r--', alpha=0.5, label="Buy and Hold") plt.xlabel("Date", fontsize=12) plt.ylabel("Value", fontsize=12) plt.legend() plt.title("Test Sample Hurst Segment XgBoost Models" + str(index), fontsize=16) plt.savefig("Test Sample Hurst Segment XgBoost Models" + str(index) +'.png') plt.show() plt.close()
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Interacting with a Car Object

In this notebook, you've been given some of the starting code for creating and interacting with a car object. Your tasks are to:

1. Become familiar with this code.
    - Know how to create a car object, and how to move and turn that car.
2. Constantly visualize.
    - To make sure your code is working as expected, frequently call `display_world()` to see the result!
3. **Make the car move in a 4x4 square path.**
    - If you understand the move and turn functions, you should be able to tell a car to move in a square path. This task is a **TODO** at the end of this notebook.

Feel free to change the values of initial variables and add functions as you see fit!

And remember, to run a cell in the notebook, press `Shift+Enter`.
import numpy as np
import car

%matplotlib inline
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Define the initial variables
# Create a 2D world of 0's
height = 4
width = 6
world = np.zeros((height, width))

# Define the initial car state
initial_position = [0, 0]  # [y, x] (top-left corner)
velocity = [0, 1]          # [vy, vx] (moving to the right)
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Create a car object
# Create a car object with these initial params
carla = car.Car(initial_position, velocity, world)

print('Carla\'s initial state is: ' + str(carla.state))
Carla's initial state is: [[0, 0], [0, 1]]
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Move and track state
# Move in the direction of the initial velocity
carla.move()

# Track the change in state
print('Carla\'s state is: ' + str(carla.state))

# Display the world
carla.display_world()
Carla's state is: [[0, 1], [0, 1]]
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
TODO: Move in a square path

Using the `move()` and `turn_left()` functions, make carla traverse a 4x4 square path (one possible approach is sketched after the code cell below).

The output should look like:
## TODO: Make carla traverse a 4x4 square path
## Display the result

carla.move()
carla.display_world()
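For reference, a minimal sketch of one way to complete the TODO, assuming `car.Car` exposes `move()` and `turn_left()` exactly as used above; the loop structure is an illustration, not the exercise's official solution.

```python
# Trace a 4x4 square: drive three cells forward, then turn left, four times over
for _ in range(4):
    for _ in range(3):
        carla.move()
    carla.turn_left()

carla.display_world()
```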
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Monte Carlo Integration with Python

Dr. Tirthajyoti Sarkar ([LinkedIn](https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/), [Github](https://github.com/tirthajyoti)), Fremont, CA, July 2020

---

Disclaimer

The inspiration for this demo/notebook stemmed from [Georgia Tech's Online Masters in Analytics (OMSA) program](https://www.gatech.edu/academics/degrees/masters/analytics-online-degree-oms-analytics) study material. I am proud to pursue this excellent Online MS program. You can also check the details [here](http://catalog.gatech.edu/programs/analytics-ms/onlinetext).

What is Monte Carlo integration?

A casino trick for mathematics

![mc-1](https://silversea-h.assetsadobe2.com/is/image/content/dam/silversea-com/ports/m/monte-carlo/silversea-luxury-cruises-monte-carlo.jpg?hei=390&wid=930&fit=crop)

Monte Carlo is, in fact, the name of the world-famous casino located in the eponymous district of the city-state (also called a Principality) of Monaco, on the world-famous French Riviera. It turns out that the casino inspired the minds of famous scientists to devise an intriguing mathematical technique for solving complex problems in statistics, numerical computing, and system simulation.

Modern origin (to make 'The Bomb')

![trinity](https://www.nps.gov/whsa/learn/historyculture/images/WHSA_trinity_cloud.jpg?maxwidth=1200&maxheight=1200&autorotate=false)

One of the first and most famous uses of this technique was during the Manhattan Project, when the chain-reaction dynamics in highly enriched uranium presented an unimaginably complex theoretical calculation to the scientists. Even genius minds like John von Neumann, Stanislaw Ulam, and Nicholas Metropolis could not tackle it in the traditional way. They therefore turned to the wonderful world of random numbers and let these probabilistic quantities tame the originally intractable calculations.

Amazingly, these random variables could solve the computing problem that stymied the sure-footed deterministic approach. The elements of uncertainty actually won, just as uncertainty and randomness rule in the world of Monte Carlo games. That was the inspiration for this particular moniker.

Today

Today, it is a technique used in a wide swath of fields:

- risk analysis, financial engineering,
- supply chain logistics,
- statistical learning and modeling,
- computer graphics, image processing, game design,
- large system simulations,
- computational physics, astronomy, etc.

For all its successes and fame, the basic idea is deceptively simple and easy to demonstrate. We demonstrate it in this article with a simple set of Python code.

The code and the demo
import numpy as np import matplotlib.pyplot as plt from scipy.integrate import quad
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
A simple function which is difficult to integrate analytically

While the general Monte Carlo simulation technique is much broader in scope, we focus particularly on the Monte Carlo integration technique here. It is nothing but a numerical method for computing complex definite integrals, which lack closed-form analytical solutions.

Say, we want to calculate,

$$\int_{0}^{4}\sqrt[4]{15x^3+21x^2+41x+3}\cdot e^{-0.5x}\, dx$$
def f1(x):
    return (15*x**3 + 21*x**2 + 41*x + 3)**(1/4) * (np.exp(-0.5*x))
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Plot
x = np.arange(0, 4.1, 0.1)
y = f1(x)

plt.figure(figsize=(8,4))
plt.title("Plot of the function: $\sqrt[4]{15x^3+21x^2+41x+3}.e^{-0.5x}$", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Riemann sums?

There are many such techniques under the general category of the [Riemann sum](https://medium.com/r/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FRiemann_sum). The idea is to divide the area under the curve into small rectangular or trapezoidal pieces, approximate each with a simple geometric calculation, and sum those components up. For a simple illustration, I show such a scheme with only 5 equispaced intervals.

For programmer friends, there is a [ready-made function in the Scipy package](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad) which can do this computation fast and accurately.
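Since the text points to `scipy.integrate.quad`, here is a minimal sketch of that reference computation for the same integral, assuming `f1` as defined above; `quad` returns the estimate together with an error bound.

```python
from scipy.integrate import quad

# Reference value for the integral of f1 over [0, 4]
result, abs_err = quad(f1, a=0, b=4)
print("quad estimate: {:.6f} (estimated absolute error: {:.2e})".format(result, abs_err))
```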
rect = np.linspace(0, 4, 5)

plt.figure(figsize=(8,4))
plt.title("Area under the curve: With Riemann sum", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rect[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
What if I go random?

What if I told you that I do not need to pick the intervals so uniformly, and, in fact, I can go completely probabilistic and pick 100% random intervals to compute the same integral?

Crazy talk? My choice of samples could look like this…
rand_lines = 4*np.random.uniform(size=5)

plt.figure(figsize=(8,4))
plt.title("With 5 random sampling intervals", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rand_lines[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Or, this?
rand_lines = 4*np.random.uniform(size=5)

plt.figure(figsize=(8,4))
plt.title("With 5 random sampling intervals", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rand_lines[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
It just works!

We don't have the time or scope to prove the theory behind it, but it can be shown that with a reasonably high number of random samples, we can, in fact, compute the integral with sufficiently high accuracy!

We just choose random numbers (between the limits), evaluate the function at those points, add them up, and scale the sum by a known factor. We are done.

OK. What are we waiting for? Let's demonstrate this claim with some simple Python code.

A simple version
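In symbols, the scheme implemented below estimates the integral as the average of $f$ at $N$ uniform random points, scaled by the length of the interval:

$$\int_a^b f(x)\, dx \;\approx\; \frac{b-a}{N}\sum_{i=1}^{N} f(x_i), \qquad x_i \sim \mathcal{U}(a, b)$$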
def monte_carlo(func, a=0, b=1, n=1000):
    """
    Monte Carlo integration
    """
    u = np.random.uniform(size=n)
    #plt.hist(u)
    u_func = func(a + (b-a)*u)
    s = ((b-a)/n) * u_func.sum()
    return s
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Another version with a more even spread (samples forced into 10 equal sub-intervals)
def monte_carlo_uniform(func, a=0, b=1, n=1000):
    """
    Monte Carlo integration with more uniform spread (forced)
    """
    subsets = np.arange(0, n+1, n/10)
    steps = n/10
    u = np.zeros(n)
    for i in range(10):
        start = int(subsets[i])
        end = int(subsets[i+1])
        u[start:end] = np.random.uniform(low=i/10, high=(i+1)/10, size=end-start)
    np.random.shuffle(u)
    #plt.hist(u)
    #u = np.random.uniform(size=n)
    u_func = func(a + (b-a)*u)
    s = ((b-a)/n) * u_func.sum()
    return s

inte = monte_carlo_uniform(f1, a=0, b=4, n=100)
print(inte)
5.73321706375046
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
How good is the calculation anyway?

This integral cannot be calculated analytically, so we need to benchmark the accuracy of the Monte Carlo method against another numerical integration technique. We chose the Scipy `integrate.quad()` function for that.

Now, you may also be thinking - **what happens to the accuracy as the sampling density changes?** This choice clearly impacts the computation speed - we need to sum fewer quantities if we choose a reduced sampling density.

Therefore, we simulated the same integral for a range of sampling densities and plotted the results on top of the gold standard - the Scipy result, shown as the horizontal line in the plot below.
inte_lst = []
for i in range(100, 2100, 50):
    inte = monte_carlo_uniform(f1, a=0, b=4, n=i)
    inte_lst.append(inte)

result, _ = quad(f1, a=0, b=4)

plt.figure(figsize=(8,4))
plt.plot([i for i in range(100, 2100, 50)], inte_lst, color='blue')
plt.hlines(y=result, xmin=0, xmax=2100, linestyle='--', lw=3)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlabel("Sample density for Monte Carlo", fontsize=15)
plt.ylabel("Integration result", fontsize=15)
plt.grid(True)
plt.legend(['Monte Carlo integration','Scipy function'], fontsize=15)
plt.show()
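To put a number on the agreement, here is a small check one could run afterwards, using `result` from `quad` and the last (densest) Monte Carlo estimate in `inte_lst`; this is an illustrative addition, not part of the original notebook.

```python
# Relative error of the densest Monte Carlo run against the quad reference
rel_err_pct = 100 * abs(inte_lst[-1] - result) / result
print("Relative error at the highest sampling density: {:.4f}%".format(rel_err_pct))
```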
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Not bad at all...

We observe some small perturbations at low sample densities, but they smooth out nicely as the sample density increases. In any case, the error is extremely small compared to the value returned by the Scipy function - on the order of 0.02%. The Monte Carlo trick works fantastically!

Speed of the Monte Carlo method

In this particular example, the Monte Carlo calculation runs about twice as fast as the Scipy integration method! While this kind of speed advantage depends on many factors, we can be assured that the Monte Carlo technique is not a slouch when it comes to computational efficiency.
%%timeit -n100 -r100
inte = monte_carlo_uniform(f1, a=0, b=4, n=500)
107 µs ± 6.57 µs per loop (mean ± std. dev. of 100 runs, 100 loops each)
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Speed of the Scipy function
%%timeit -n100 -r100
quad(f1, a=0, b=4)
216 µs ± 5.31 µs per loop (mean ± std. dev. of 100 runs, 100 loops each)
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Repeat

For a probabilistic technique like Monte Carlo integration, it goes without saying that mathematicians and scientists almost never stop at just one run; they repeat the calculation a number of times and take the average.

Here is a distribution plot from a 10,000-run experiment. As you can see, the plot closely resembles a Gaussian (Normal) distribution, and this fact can be utilized not only to get the average value but also to construct confidence intervals around that result (a small sketch of such an interval follows the plot below).
inte_lst = []
for i in range(10000):
    inte = monte_carlo_uniform(f1, a=0, b=4, n=500)
    inte_lst.append(inte)

plt.figure(figsize=(8,4))
plt.title("Distribution of the Monte Carlo runs", fontsize=18)
plt.hist(inte_lst, bins=50, color='orange', edgecolor='k')
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlabel("Integration result", fontsize=15)
plt.ylabel("Density", fontsize=15)
plt.show()
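As mentioned above, the near-Gaussian shape lets us attach a confidence interval to the averaged result. A minimal sketch, assuming the usual normal approximation with a 1.96 multiplier for 95% coverage:

```python
runs = np.array(inte_lst)
mean_est = runs.mean()
# 95% confidence interval for the mean under a normal approximation
half_width = 1.96 * runs.std(ddof=1) / np.sqrt(len(runs))
print("Integral estimate: {:.4f} +/- {:.4f} (95% CI)".format(mean_est, half_width))
```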
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
This illustrates the datasets.make_multilabel_classification dataset generator. Each sample consists of counts of two features (up to 50 in total), which are differently distributed in each of two classes.

Points are labeled as follows, where Y means the class is present:

| 1 | 2 | 3 | Color  |
|---|---|---|--------|
| Y | N | N | Red    |
| N | Y | N | Blue   |
| N | N | Y | Yellow |
| Y | Y | N | Purple |
| Y | N | Y | Orange |
| Y | Y | N | Green  |
| Y | Y | Y | Brown  |

A big circle marks the expected sample for each class; its size reflects the probability of selecting that class label.

The left and right examples highlight the n_labels parameter: more of the samples in the right plot have 2 or 3 labels.

Note that this two-dimensional example is very degenerate: generally the number of features would be much greater than the "document length", while here we have much larger documents than vocabulary. Similarly, with n_classes > n_features, it is much less likely that a feature distinguishes a particular class.

New to Plotly?

Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).

You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).

We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!

Version
import sklearn
sklearn.__version__
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Imports

This tutorial imports [make_ml_clf](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html#sklearn.datasets.make_multilabel_classification).
# __future__ imports must come before any other statements in the cell
from __future__ import print_function

import plotly.plotly as py
import plotly.graph_objs as go

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification as make_ml_clf
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Calculations
COLORS = np.array(['!',
                   '#FF3333',  # red
                   '#0198E1',  # blue
                   '#BF5FFF',  # purple
                   '#FCD116',  # yellow
                   '#FF7216',  # orange
                   '#4DBD33',  # green
                   '#87421F'   # brown
                   ])

# Use same random seed for multiple calls to make_multilabel_classification to
# ensure same distributions
RANDOM_SEED = np.random.randint(2 ** 10)

def plot_2d(n_labels=1, n_classes=3, length=50):
    X, Y, p_c, p_w_c = make_ml_clf(n_samples=150, n_features=2,
                                   n_classes=n_classes, n_labels=n_labels,
                                   length=length, allow_unlabeled=False,
                                   return_distributions=True,
                                   random_state=RANDOM_SEED)

    trace1 = go.Scatter(x=X[:, 0], y=X[:, 1],
                        mode='markers',
                        showlegend=False,
                        marker=dict(size=8,
                                    color=COLORS.take((Y * [1, 2, 4]).sum(axis=1)))
                        )
    trace2 = go.Scatter(x=p_w_c[0] * length, y=p_w_c[1] * length,
                        mode='markers',
                        showlegend=False,
                        marker=dict(color=COLORS.take([1, 2, 4]),
                                    size=14,
                                    line=dict(width=1, color='black'))
                        )
    data = [trace1, trace2]
    return data, p_c, p_w_c
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Plot Results

n_labels=1
data, p_c, p_w_c = plot_2d(n_labels=1)

layout = go.Layout(title='n_labels=1, length=50',
                   xaxis=dict(title='Feature 0 count', showgrid=False),
                   yaxis=dict(title='Feature 1 count', showgrid=False),
                   )

fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
n_labels=3
data = plot_2d(n_labels=3) layout=go.Layout(title='n_labels=3, length=50', xaxis=dict(title='Feature 0 count', showgrid=False), yaxis=dict(title='Feature 1 count', showgrid=False), ) fig = go.Figure(data=data[0], layout=layout) py.iplot(fig) print('The data was generated from (random_state=%d):' % RANDOM_SEED) print('Class', 'P(C)', 'P(w0|C)', 'P(w1|C)', sep='\t') for k, p, p_w in zip(['red', 'blue', 'yellow'], p_c, p_w_c.T): print('%s\t%0.2f\t%0.2f\t%0.2f' % (k, p, p_w[0], p_w[1])) from IPython.display import display, HTML display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />')) display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">')) ! pip install git+https://github.com/plotly/publisher.git --upgrade import publisher publisher.publish( 'randomly-generated-multilabel-dataset.ipynb', 'scikit-learn/plot-random-multilabel-dataset/', 'Randomly Generated Multilabel Dataset | plotly', ' ', title = 'Randomly Generated Multilabel Dataset| plotly', name = 'Randomly Generated Multilabel Dataset', has_thumbnail='true', thumbnail='thumbnail/multilabel-dataset.jpg', language='scikit-learn', page_type='example_index', display_as='dataset', order=4, ipynb= '~Diksha_Gabha/2909')
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Log-transform the concentrations and learn the models for CaCO3 again, so that zero (or negative) values cannot occur in the predictions.
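The reasoning, in symbols: if the model is trained on $z=\ln(y)$ and its prediction $\hat z$ is mapped back with the exponential, the result is strictly positive:

$$\hat{y} = e^{\hat{z}} > 0$$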
import numpy as np
import pandas as pd
import dask.dataframe as dd
import matplotlib.pyplot as plt
import seaborn as sns

plt.style.use('ggplot')
#plt.style.use('seaborn-whitegrid')
plt.style.use('seaborn-colorblind')
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300
plt.rcParams['savefig.bbox'] = 'tight'

import datetime
date = datetime.datetime.now().strftime('%Y%m%d')

%matplotlib inline
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Launch deployment
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    project="[email protected]",
    queue='main',
    cores=40,
    memory='10 GB',
    walltime="00:10:00",
    log_directory='job_logs'
)

client.close()
cluster.close()

client = Client(cluster)
cluster.scale(100)
#cluster.adapt(maximum=100)
client
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Build model for CaCO3
from dask_ml.model_selection import train_test_split

merge_df = dd.read_csv('data/spe+bulk_dataset_20201008.csv')

X = merge_df.iloc[:, 1: -5].to_dask_array(lengths=True)
X = X / X.sum(axis = 1, keepdims = True)
y = merge_df['CaCO3%'].to_dask_array(lengths=True)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2,
                                                    shuffle = True, random_state = 24)
y
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Grid search

We know the relationship between the spectra and the bulk measurements might not be linear, and based on pilot_test.ipynb, the SVR algorithm with an NMF transformation gives the better CV score. So we focus the grid search on an NMF transformation (4, 5, 6, 7, 8 components, based on the PCA result) combined with SVR.

First, we tried to build the model on the log-transformed y and evaluate the score on y transformed back to the original space by using TransformedTargetRegressor. However, something seems to go wrong with the parallelism in Dask, so we do the workflow manually: transform y_train with np.log during training, use the model to predict on X_test, transform y_predict back to the original space with np.exp, and evaluate the score. (A sketch of the TransformedTargetRegressor variant is shown below for reference.)
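For reference, this is roughly what the non-manual version would look like in plain scikit-learn, assuming the `pipe` (NMF + SVR) defined in the next cell; it is a sketch of the approach the text says was abandoned because of the Dask parallelism issue, not the workflow actually used below.

```python
from sklearn.compose import TransformedTargetRegressor

# Wrap the NMF + SVR pipeline so that y is fitted in log space and
# predictions are automatically mapped back with exp
ttr = TransformedTargetRegressor(regressor=pipe, func=np.log, inverse_func=np.exp)
ttr.fit(X_train, y_train)
print(ttr.score(X_test, y_test))  # R^2 evaluated on the original concentration scale
```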
from dask_ml.model_selection import GridSearchCV from sklearn.decomposition import NMF from sklearn.svm import SVR from sklearn.pipeline import make_pipeline from sklearn.compose import TransformedTargetRegressor pipe = make_pipeline(NMF(max_iter = 2000, random_state = 24), SVR()) params = { 'nmf__n_components': [4, 5, 6, 7, 8], 'svr__C': np.logspace(0, 7, 8), 'svr__gamma': np.logspace(-5, 0, 6) } grid = GridSearchCV(pipe, param_grid = params, cv = 10, n_jobs = -1) grid.fit(X_train, np.log(y_train)) print('The best cv score: {:.3f}'.format(grid.best_score_)) #print('The test score: {:.3f}'.format(grid.best_estimator_.score(X_test, y_test))) print('The best model\'s parameters: {}'.format(grid.best_estimator_)) y_predict = np.exp(grid.best_estimator_.predict(X_test)) y_ttest = np.array(y_test) from sklearn.metrics import r2_score from sklearn.metrics import mean_absolute_error from sklearn.metrics import max_error print('Scores in the test set:') print('R2 = {:.3f} .'.format(r2_score(y_ttest, y_predict))) print('The mean absolute error is {:.3f} (%, concetration).'.format(mean_absolute_error(y_ttest, y_predict))) print('The max. residual error is {:.3f} (%, concetration).'.format(max_error(y_ttest, y_predict))) plt.plot(range(len(y_predict)), y_ttest, alpha=0.6, label='Measurement') plt.plot(range(len(y_predict)), y_predict, label='Prediction (R$^2$={:.2f})'.format(r2_score(y_ttest, y_predict))) #plt.text(12, -7, r'R$^2$={:.2f}, mean ab. error={:.2f}, max. ab. error={:.2f}'.format(grid.best_score_, mean_absolute_error(y_ttest, y_predict), max_error(y_ttest, y_predict))) plt.ylabel('CaCO$_3$ concentration (%)') plt.xlabel('Sample no.') plt.legend(loc = 'upper right') plt.savefig('results/caco3_predictions_nmr+svr_{}.png'.format(date))
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Visualization
#result_df = pd.DataFrame(grid.cv_results_) #result_df.to_csv('results/caco3_grid_nmf+svr_{}.csv'.format(date)) result_df = pd.read_csv('results/caco3_grid_nmf+svr_20201013.csv', index_col = 0) #result_df = result_df[result_df.mean_test_score > -1].reset_index(drop = True) from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm for n_components in [4, 5, 6, 7, 8]: data = result_df[result_df.param_nmf__n_components == n_components].reset_index(drop = True) fig = plt.figure(figsize = (7.3,5)) ax = fig.gca(projection='3d') xx = data.param_svr__gamma.astype(float) yy = data.param_svr__C.astype(float) zz = data.mean_test_score.astype(float) max_index = np.argmax(zz) surf = ax.plot_trisurf(np.log10(xx), np.log10(yy), zz, cmap=cm.Greens, linewidth=0.1) ax.scatter3D(np.log10(xx), np.log10(yy), zz, c = 'orange', s = 5) # mark the best score ax.scatter3D(np.log10(xx), np.log10(yy), zz, c = 'w', s = 5, alpha = 1) text = '{} components\n$\gamma :{:.1f}$, C: {:.1e},\nscore:{:.3f}'.format(n_components, xx[max_index], yy[max_index], zz[max_index]) ax.text(np.log10(xx[max_index])-3, np.log10(yy[max_index]), 1,text, fontsize=12) ax.set_zlim(-.6, 1.2) ax.set_zticks(np.linspace(-.5, 1, 4)) ax.set_xlabel('$log(\gamma)$') ax.set_ylabel('log(C)') ax.set_zlabel('CV score') #fig.colorbar(surf, shrink=0.5, aspect=5) fig.savefig('results/caco3_grid_{}nmr+svr_3D_{}.png'.format(n_components, date)) n_components = [4, 5, 6, 7, 8] scores = [] for n in n_components: data = result_df[result_df.param_nmf__n_components == n].reset_index(drop = True) rank_min = data.rank_test_score.min() scores = np.hstack((scores, data.loc[data.rank_test_score == rank_min, 'mean_test_score'].values)) plt.plot(n_components, scores, marker='o') plt.xticks(n_components) plt.yticks(np.linspace(0.86, 0.875, 4)) plt.xlabel('Amount of components') plt.ylabel('Best CV score') plt.savefig('results/caco3_scores_components_{}.png'.format(date)) from joblib import dump, load #model = load('models/tc_nmf+svr_model_20201012.joblib') dump(grid.best_estimator_, 'models/caco3_nmf+svr_model_{}.joblib'.format(date))
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Check prediction
spe_df = pd.read_csv('data/spe_dataset_20201008.csv', index_col = 0)

X = spe_df.iloc[:, :2048].values
X = X / X.sum(axis = 1, keepdims = True)

y_caco3 = np.exp(grid.best_estimator_.predict(X))

len(y_caco3[y_caco3 < 0])
len(y_caco3[y_caco3 > 100])
len(y_caco3[y_caco3 > 100])/len(y_caco3)
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)

Kalman Filter Math
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
If you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I hand waved some equations away, but I hope implementation has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement and a prediction, and choose the output to be somewhere between the two. If you believe the measurement more your guess will be closer to the measurement, and if you believe the prediction is more accurate your guess will lie closer to it. That's not rocket science (little joke - it is exactly this math that got Apollo to the moon and back!). To be honest I have been choosing my problems carefully. For an arbitrary problem designing the Kalman filter matrices can be extremely difficult. I haven't been *too tricky*, though. Equations like Newton's equations of motion can be trivially computed for Kalman filter applications, and they make up the bulk of the kind of problems that we want to solve. I have illustrated the concepts with code and reasoning, not math. But there are topics that do require more mathematics than I have used so far. This chapter presents the math that you will need for the rest of the book. Modeling a Dynamic SystemA *dynamic system* is a physical system whose state (position, temperature, etc) evolves over time. Calculus is the math of changing values, so we use differential equations to model dynamic systems. Some systems cannot be modeled with differential equations, but we will not encounter those in this book.Modeling dynamic systems is properly the topic of several college courses. To an extent there is no substitute for a few semesters of ordinary and partial differential equations followed by a graduate course in control system theory. If you are a hobbyist, or trying to solve one very specific filtering problem at work you probably do not have the time and/or inclination to devote a year or more to that education.Fortunately, I can present enough of the theory to allow us to create the system equations for many different Kalman filters. My goal is to get you to the stage where you can read a publication and understand it well enough to implement the algorithms. The background math is deep, but in practice we end up using a few simple techniques. This is the longest section of pure math in this book. You will need to master everything in this section to understand the Extended Kalman filter (EKF), the most common nonlinear filter. I do cover more modern filters that do not require as much of this math. You can choose to skim now, and come back to this if you decide to learn the EKF.We need to start by understanding the underlying equations and assumptions that the Kalman filter uses. We are trying to model real world phenomena, so what do we have to consider?Each physical system has a process. For example, a car traveling at a certain velocity goes so far in a fixed amount of time, and its velocity varies as a function of its acceleration. We describe that behavior with the well known Newtonian equations that we learned in high school.$$\begin{aligned}v&=at\\x &= \frac{1}{2}at^2 + v_0t + x_0\end{aligned}$$Once we learned calculus we saw them in this form:$$ \mathbf v = \frac{d \mathbf x}{d t}, \quad \mathbf a = \frac{d \mathbf v}{d t} = \frac{d^2 \mathbf x}{d t^2}$$A typical automobile tracking problem would have you compute the distance traveled given a constant velocity or acceleration, as we did in previous chapters. 
But, of course we know this is not all that is happening. No car travels on a perfect road. There are bumps, wind drag, and hills that raise and lower the speed. The suspension is a mechanical system with friction and imperfect springs.Perfectly modeling a system is impossible except for the most trivial problems. We are forced to make a simplification. At any time $t$ we say that the true state (such as the position of our car) is the predicted value from the imperfect model plus some unknown *process noise*:$$x(t) = x_{pred}(t) + noise(t)$$This is not meant to imply that $noise(t)$ is a function that we can derive analytically. It is merely a statement of fact - we can always describe the true value as the predicted value plus the process noise. "Noise" does not imply random events. If we are tracking a thrown ball in the atmosphere, and our model assumes the ball is in a vacuum, then the effect of air drag is process noise in this context.In the next section we will learn techniques to convert a set of higher order differential equations into a set of first-order differential equations. After the conversion the model of the system without noise is:$$ \dot{\mathbf x} = \mathbf{Ax}$$$\mathbf A$ is known as the *systems dynamics matrix* as it describes the dynamics of the system. Now we need to model the noise. We will call that $\mathbf w$, and add it to the equation. $$ \dot{\mathbf x} = \mathbf{Ax} + \mathbf w$$$\mathbf w$ may strike you as a poor choice for the name, but you will soon see that the Kalman filter assumes *white* noise.Finally, we need to consider any inputs into the system. We assume an input $\mathbf u$, and that there exists a linear model that defines how that input changes the system. For example, pressing the accelerator in your car makes it accelerate, and gravity causes balls to fall. Both are contol inputs. We will need a matrix $\mathbf B$ to convert $u$ into the effect on the system. We add that into our equation:$$ \dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}$$And that's it. That is one of the equations that Dr. Kalman set out to solve, and he found an optimal estimator if we assume certain properties of $\mathbf w$. State-Space Representation of Dynamic Systems We've derived the equation$$ \dot{\mathbf x} = \mathbf{Ax}+ \mathbf{Bu} + \mathbf{w}$$However, we are not interested in the derivative of $\mathbf x$, but in $\mathbf x$ itself. Ignoring the noise for a moment, we want an equation that recusively finds the value of $\mathbf x$ at time $t_k$ in terms of $\mathbf x$ at time $t_{k-1}$:$$\mathbf x(t_k) = \mathbf F(\Delta t)\mathbf x(t_{k-1}) + \mathbf B(t_k) + \mathbf u (t_k)$$Convention allows us to write $\mathbf x(t_k)$ as $\mathbf x_k$, which means the the value of $\mathbf x$ at the k$^{th}$ value of $t$.$$\mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k$$$\mathbf F$ is the familiar *state transition matrix*, named due to its ability to transition the state's value between discrete time steps. It is very similar to the system dynamics matrix $\mathbf A$. The difference is that $\mathbf A$ models a set of linear differential equations, and is continuous. $\mathbf F$ is discrete, and represents a set of linear equations (not differential equations) which transitions $\mathbf x_{k-1}$ to $\mathbf x_k$ over a discrete time step $\Delta t$. Finding this matrix is often quite difficult. 
The equation $\dot x = v$ is the simplest possible differential equation and we trivially integrate it as:$$ \int\limits_{x_{k-1}}^{x_k} \mathrm{d}x = \int\limits_{0}^{\Delta t} v\, \mathrm{d}t $$$$x_k-x_0 = v \Delta t$$$$x_k = v \Delta t + x_0$$This equation is *recursive*: we compute the value of $x$ at time $t$ based on its value at time $t-1$. This recursive form enables us to represent the system (process model) in the form required by the Kalman filter:$$\begin{aligned}\mathbf x_k &= \mathbf{Fx}_{k-1} \\&= \begin{bmatrix} 1 & \Delta t \\ 0 & 1\end{bmatrix}\begin{bmatrix}x_{k-1} \\ \dot x_{k-1}\end{bmatrix}\end{aligned}$$We can do that only because $\dot x = v$ is simplest differential equation possible. Almost all other in physical systems result in more complicated differential equation which do not yield to this approach. *State-space* methods became popular around the time of the Apollo missions, largely due to the work of Dr. Kalman. The idea is simple. Model a system with a set of $n^{th}$-order differential equations. Convert them into an equivalent set of first-order differential equations. Put them into the vector-matrix form used in the previous section: $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu}$. Once in this form we use of of several techniques to convert these linear differential equations into the recursive equation:$$ \mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k$$Some books call the state transition matrix the *fundamental matrix*. Many use $\mathbf \Phi$ instead of $\mathbf F$. Sources based heavily on control theory tend to use these forms.These are called *state-space* methods because we are expressing the solution of the differential equations in terms of the system state. Forming First Order Equations from Higher Order EquationsMany models of physical systems require second or higher order differential equations with control input $u$:$$a_n \frac{d^ny}{dt^n} + a_{n-1} \frac{d^{n-1}y}{dt^{n-1}} + \dots + a_2 \frac{d^2y}{dt^2} + a_1 \frac{dy}{dt} + a_0 = u$$State-space methods require first-order equations. Any higher order system of equations can be reduced to first-order by defining extra variables for the derivatives and then solving. Let's do an example. Given the system $\ddot{x} - 6\dot x + 9x = u$ find the equivalent first order equations. I've used the dot notation for the time derivatives for clarity.The first step is to isolate the highest order term onto one side of the equation.$$\ddot{x} = 6\dot x - 9x + u$$We define two new variables:$$\begin{aligned} x_1(u) &= x \\x_2(u) &= \dot x\end{aligned}$$Now we will substitute these into the original equation and solve. The solution yields a set of first-order equations in terms of these new variables. It is conventional to drop the $(u)$ for notational convenience.We know that $\dot x_1 = x_2$ and that $\dot x_2 = \ddot{x}$. Therefore$$\begin{aligned}\dot x_2 &= \ddot{x} \\ &= 6\dot x - 9x + t\\ &= 6x_2-9x_1 + t\end{aligned}$$Therefore our first-order system of equations is$$\begin{aligned}\dot x_1 &= x_2 \\\dot x_2 &= 6x_2-9x_1 + t\end{aligned}$$If you practice this a bit you will become adept at it. Isolate the highest term, define a new variable and its derivatives, and then substitute. 
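As a concrete check of the example just worked (writing the control input as $u$), the two first-order equations can be stacked into the vector-matrix form used throughout this chapter:

$$\begin{bmatrix}\dot x_1 \\ \dot x_2\end{bmatrix} =
\begin{bmatrix}0 & 1 \\ -9 & 6\end{bmatrix}
\begin{bmatrix}x_1 \\ x_2\end{bmatrix} +
\begin{bmatrix}0 \\ 1\end{bmatrix}u$$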
First Order Differential Equations In State-Space FormSubstituting the newly defined variables from the previous section:$$\frac{dx_1}{dt} = x_2,\, \frac{dx_2}{dt} = x_3, \, ..., \, \frac{dx_{n-1}}{dt} = x_n$$into the first order equations yields: $$\frac{dx_n}{dt} = \frac{1}{a_n}\sum\limits_{i=0}^{n-1}a_ix_{i+1} + \frac{1}{a_n}u$$Using vector-matrix notation we have:$$\begin{bmatrix}\frac{dx_1}{dt} \\ \frac{dx_2}{dt} \\ \vdots \\ \frac{dx_n}{dt}\end{bmatrix} = \begin{bmatrix}\dot x_1 \\ \dot x_2 \\ \vdots \\ \dot x_n\end{bmatrix}=\begin{bmatrix}0 & 1 & 0 &\cdots & 0 \\0 & 0 & 1 & \cdots & 0 \\\vdots & \vdots & \vdots & \ddots & \vdots \\-\frac{a_0}{a_n} & -\frac{a_1}{a_n} & -\frac{a_2}{a_n} & \cdots & -\frac{a_{n-1}}{a_n}\end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} + \begin{bmatrix}0 \\ 0 \\ \vdots \\ \frac{1}{a_n}\end{bmatrix}u$$which we then write as $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{B}u$. Finding the Fundamental Matrix for Time Invariant SystemsWe express the system equations in state-space form with$$ \dot{\mathbf x} = \mathbf{Ax}$$where $\mathbf A$ is the system dynamics matrix, and want to find the *fundamental matrix* $\mathbf F$ that propagates the state $\mathbf x$ over the interval $\Delta t$ with the equation$$\begin{aligned}\mathbf x(t_k) = \mathbf F(\Delta t)\mathbf x(t_{k-1})\end{aligned}$$In other words, $\mathbf A$ is a set of continuous differential equations, and we need $\mathbf F$ to be a set of discrete linear equations that computes the change in $\mathbf A$ over a discrete time step.It is conventional to drop the $t_k$ and $(\Delta t)$ and use the notation$$\mathbf x_k = \mathbf {Fx}_{k-1}$$Broadly speaking there are three common ways to find this matrix for Kalman filters. The technique most often used is the matrix exponential. Linear Time Invariant Theory, also known as LTI System Theory, is a second technique. Finally, there are numerical techniques. You may know of others, but these three are what you will most likely encounter in the Kalman filter literature and praxis. The Matrix ExponentialThe solution to the equation $\frac{dx}{dt} = kx$ can be found by:$$\begin{gathered}\frac{dx}{dt} = kx \\\frac{dx}{x} = k\, dt \\\int \frac{1}{x}\, dx = \int k\, dt \\\log x = kt + c \\x = e^{kt+c} \\x = e^ce^{kt} \\x = c_0e^{kt}\end{gathered}$$Using similar math, the solution to the first-order equation $$\dot{\mathbf x} = \mathbf{Ax} ,\, \, \, \mathbf x(0) = \mathbf x_0$$where $\mathbf A$ is a constant matrix, is$$\mathbf x = e^{\mathbf At}\mathbf x_0$$Substituting $F = e^{\mathbf At}$, we can write $$\mathbf x_k = \mathbf F\mathbf x_{k-1}$$which is the form we are looking for! We have reduced the problem of finding the fundamental matrix to one of finding the value for $e^{\mathbf At}$.$e^{\mathbf At}$ is known as the [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential). It can be computed with this power series:$$e^{\mathbf At} = \mathbf{I} + \mathbf{A}t + \frac{(\mathbf{A}t)^2}{2!} + \frac{(\mathbf{A}t)^3}{3!} + ... $$That series is found by doing a Taylor series expansion of $e^{\mathbf At}$, which I will not cover here.Let's use this to find the solution to Newton's equations. 
Using $v$ as a substitution for $\dot x$, and assuming constant velocity, we get the linear matrix-vector form $$\begin{bmatrix}\dot x \\ \dot v\end{bmatrix} =\begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\ v\end{bmatrix}$$This is a first order differential equation, so we can set $\mathbf{A}=\begin{bmatrix}0&1\\0&0\end{bmatrix}$ and solve the following equation. I have substituted the interval $\Delta t$ for $t$ to emphasize that the fundamental matrix is discrete:$$\mathbf F = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A\Delta t)^3}{3!} + ... $$If you perform the multiplication you will find that $\mathbf{A}^2=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, which means that all higher powers of $\mathbf{A}$ are also $\mathbf{0}$. Thus we get an exact answer without an infinite number of terms:$$\begin{aligned}\mathbf F &=\mathbf{I} + \mathbf A \Delta t + \mathbf{0} \\&= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\&= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\end{aligned}$$We plug this into $\mathbf x_k= \mathbf{Fx}_{k-1}$ to get$$\begin{aligned}\mathbf x_k &=\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}\mathbf x_{k-1}\end{aligned}$$You will recognize this as the matrix we derived analytically for the constant velocity Kalman filter in the **Multivariate Kalman Filter** chapter. SciPy's linalg module includes a routine `expm()` to compute the matrix exponential. It does not use the Taylor series method, but the [Padé Approximation](https://en.wikipedia.org/wiki/Pad%C3%A9_approximant). There are many (at least 19) methods to compute the matrix exponential, and all suffer from numerical difficulties[1]. You should be aware of the problems, especially when $\mathbf A$ is large. If you search for "pade approximation matrix exponential" you will find many publications devoted to this problem. In practice this may not be of concern to you, as for the Kalman filter we normally just take the first two terms of the Taylor series. But don't assume my treatment of the problem is complete and run off and try to use this technique for other problems without doing a numerical analysis of its performance. Interestingly, one of the favored ways of solving $e^{\mathbf At}$ is to use a generalized ODE solver. In other words, they do the opposite of what we do - turn $\mathbf A$ into a set of differential equations, and then solve that set using numerical techniques! Here is an example of using `expm()` to solve $e^{\mathbf At}$.
import numpy as np from scipy.linalg import expm dt = 0.1 A = np.array([[0, 1], [0, 0]]) expm(A*dt)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
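As a sanity check on the power series discussed above, here is a small sketch (not from the original text; the step size is an arbitrary choice) that builds $e^{\mathbf A\Delta t}$ from the first few Taylor terms and compares it with SciPy's `expm()` for the constant velocity model:

```python
import numpy as np
from scipy.linalg import expm

dt = 0.1
A = np.array([[0., 1.],
              [0., 0.]])

# Truncated power series: I + A*dt + (A*dt)^2/2! + (A*dt)^3/3!
F_series = np.eye(2)
term = np.eye(2)
for k in range(1, 4):
    term = term @ (A * dt) / k
    F_series = F_series + term

print(F_series)      # [[1, 0.1], [0, 1]] -- exact here, because A^2 = 0
print(expm(A * dt))  # SciPy's Pade-based result matches
```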
Time Invariance If the behavior of the system depends on time we can say that a dynamic system is described by the first-order differential equation$$ g(t) = \dot x$$However, if the system is *time invariant* the equation is of the form:$$ f(x) = \dot x$$What does *time invariant* mean? Consider a home stereo. If you input a signal $x$ into it at time $t$, it will output some signal $f(x)$. If you instead perform the input at time $t + \Delta t$ the output signal will be the same $f(x)$, shifted in time. A counter-example is $x(t) = \sin(t)$, with the system $f(x) = t\, x(t) = t \sin(t)$. This is not time invariant; the value will be different at different times due to the multiplication by $t$. An aircraft is not time invariant. If you make a control input to the aircraft at a later time its behavior will be different because it will have burned fuel and thus lost weight. Lower weight results in different behavior. We can solve these equations by integrating each side. I demonstrated integrating the time invariant system $v = \dot x$ above. However, integrating the general time invariant equation $\dot x = f(x)$ is not so straightforward. Using the *separation of variables* technique we divide by $f(x)$ and move the $dt$ term to the right so we can integrate each side:$$\begin{gathered}\frac{dx}{dt} = f(x) \\\int^x_{x_0} \frac{1}{f(x)} dx = \int^t_{t_0} dt\end{gathered}$$If we let $F(x) = \int \frac{1}{f(x)} dx$ we get$$F(x) - F(x_0) = t-t_0$$We then solve for $x$ with$$\begin{gathered}F(x) = t - t_0 + F(x_0) \\x = F^{-1}[t-t_0 + F(x_0)]\end{gathered}$$In other words, we need to find the inverse of $F$. This is not trivial, and a significant amount of coursework in a STEM education is devoted to finding tricky, analytic solutions to this problem. However, they are tricks, and many simple forms of $f(x)$ either have no closed form solution or pose extreme difficulties. Instead, the practicing engineer turns to state-space methods to find approximate solutions. The advantage of the matrix exponential is that we can use it for any arbitrary set of differential equations which are *time invariant*. However, we often use this technique even when the equations are not time invariant. As an aircraft flies it burns fuel and loses weight. However, the weight loss over one second is negligible, and so the system is nearly linear over that time step. Our answers will still be reasonably accurate so long as the time step is short. Example: Mass-Spring-Damper Model Suppose we wanted to track the motion of a weight on a spring connected to a damper, such as an automobile's suspension. The equation for the motion with $m$ being the mass, $k$ the spring constant, and $c$ the damping coefficient, under some input $u$ is $$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} +kx = u$$For notational convenience I will write that as$$m\ddot x + c\dot x + kx = u$$I can turn this into a system of first order equations by setting $x_1(t)=x(t)$, and then substituting as follows:$$\begin{aligned}x_1 &= x \\x_2 &= \dot x_1 \\\dot x_2 &= \ddot x_1 = \ddot x\end{aligned}$$As is common I dropped the $(t)$ for notational convenience.
This gives the equation$$m\dot x_2 + c x_2 +kx_1 = u$$Solving for $\dot x_2$ we get a first order equation:$$\dot x_2 = -\frac{c}{m}x_2 - \frac{k}{m}x_1 + \frac{1}{m}u$$We put this into matrix form:$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix}0 & 1 \\ -k/m & -c/m \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$Now we use the matrix exponential to find the state transition matrix:$$\Phi(t) = e^{\mathbf At} = \mathbf{I} + \mathbf At + \frac{(\mathbf At)^2}{2!} + \frac{(\mathbf At)^3}{3!} + ... $$The first two terms give us$$\mathbf F = \begin{bmatrix}1 & t \\ -(k/m) t & 1-(c/m) t \end{bmatrix}$$This may or may not give you enough precision. You can easily check this by computing $\frac{(\mathbf At)^2}{2!}$ for your constants and seeing how much this matrix contributes to the results. Linear Time Invariant Theory [*Linear Time Invariant Theory*](https://en.wikipedia.org/wiki/LTI_system_theory), also known as LTI System Theory, gives us a way to find $\Phi$ using the inverse Laplace transform. You are either nodding your head now, or completely lost. I will not be using the Laplace transform in this book. LTI system theory tells us that $$ \Phi(t) = \mathcal{L}^{-1}[(s\mathbf{I} - \mathbf{A})^{-1}]$$I have no intention of going into this other than to say that the Laplace transform $\mathcal{L}$ converts a signal into a space $s$ that excludes time, but finding a solution to the equation above is non-trivial. If you are interested, the Wikipedia article on LTI system theory provides an introduction. I mention LTI because you will find some literature using it to design the Kalman filter matrices for difficult problems. Numerical Solutions Finally, there are numerical techniques to find $\mathbf F$. As filters get larger finding analytical solutions becomes very tedious (though packages like SymPy make it easier). C. F. van Loan [2] has developed a technique that finds both $\Phi$ and $\mathbf Q$ numerically. Given the continuous model$$ \dot x = Ax + Gw$$where $w$ is unity white noise, van Loan's method computes both $\mathbf F_k$ and $\mathbf Q_k$. I have implemented van Loan's method in `FilterPy`. You may use it as follows:
```python
from filterpy.common import van_loan_discretization

A = np.array([[0., 1.], [-1., 0.]])
G = np.array([[0.], [2.]])  # white noise scaling
F, Q = van_loan_discretization(A, G, dt=0.1)
```
In the section *Numeric Integration of Differential Equations* I present alternative methods which are very commonly used in Kalman filtering. Design of the Process Noise Matrix In general the design of the $\mathbf Q$ matrix is among the most difficult aspects of Kalman filter design. This is due to several factors. First, the math requires a good foundation in signal theory. Second, we are trying to model the noise in something for which we have little information. Consider trying to model the process noise for a thrown baseball. We can model it as a sphere moving through the air, but that leaves many unknown factors - the wind, ball rotation and spin decay, the coefficient of drag of a ball with stitches, the effects of air density, and so on. We develop the equations for an exact mathematical solution for a given process model, but since the process model is incomplete the result for $\mathbf Q$ will also be incomplete. This has a lot of ramifications for the behavior of the Kalman filter.
If $\mathbf Q$ is too small then the filter will be overconfident in its prediction model and will diverge from the actual solution. If $\mathbf Q$ is too large then the filter will be unduly influenced by the noise in the measurements and perform sub-optimally. In practice we spend a lot of time running simulations and evaluating collected data to try to select an appropriate value for $\mathbf Q$. But let's start by looking at the math. Let's assume a kinematic system - some system that can be modeled using Newton's equations of motion. We can make a few different assumptions about this process. We have been using a process model of$$ \dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}$$where $\mathbf{w}$ is the process noise. Kinematic systems are *continuous* - their inputs and outputs can vary at any arbitrary point in time. However, our Kalman filters are *discrete* (there are continuous forms for Kalman filters, but we do not cover them in this book). We sample the system at regular intervals. Therefore we must find the discrete representation for the noise term in the equation above. This depends on what assumptions we make about the behavior of the noise. We will consider two different models for the noise. Continuous White Noise Model We model kinematic systems using Newton's equations. We have either used position and velocity, or position, velocity, and acceleration as the models for our systems. There is nothing stopping us from going further - we can model jerk, jounce, snap, and so on. We don't do that normally because adding terms beyond the dynamics of the real system degrades the estimate. Let's say that we need to model the position, velocity, and acceleration. We can then assume that acceleration is constant for each discrete time step. Of course, there is process noise in the system and so the acceleration is not actually constant. The tracked object will alter the acceleration over time due to external, unmodeled forces. In this section we will assume that the acceleration changes by a continuous time zero-mean white noise $w(t)$. In other words, we are assuming that the small changes in velocity average to 0 over time (zero-mean). Since the noise is changing continuously we will need to integrate to get the discrete noise for the discretization interval that we have chosen. We will not prove it here, but the equation for the discretization of the noise is$$\mathbf Q = \int_0^{\Delta t} \mathbf F(t)\mathbf{Q_c}\mathbf F^\mathsf{T}(t) dt$$where $\mathbf{Q_c}$ is the continuous noise. This gives us$$\Phi = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$for the fundamental matrix, and$$\mathbf{Q_c} = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix} \Phi_s$$for the continuous process noise matrix, where $\Phi_s$ is the spectral density of the white noise. We could carry out these computations ourselves, but I prefer using SymPy to solve the equation.$$\mathbf{Q_c} = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix} \Phi_s$$
import sympy from sympy import (init_printing, Matrix,MatMul, integrate, symbols) init_printing(use_latex='mathjax') dt, phi = symbols('\Delta{t} \Phi_s') F_k = Matrix([[1, dt, dt**2/2], [0, 1, dt], [0, 0, 1]]) Q_c = Matrix([[0, 0, 0], [0, 0, 0], [0, 0, 1]])*phi Q=sympy.integrate(F_k * Q_c * F_k.T, (dt, 0, dt)) # factor phi out of the matrix to make it more readable Q = Q / phi sympy.MatMul(Q, phi)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
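If you would rather verify the symbolic result numerically, the following sketch (the values of $\Delta t$ and $\Phi_s$ are arbitrary choices, not from the text) approximates $\int_0^{\Delta t}\mathbf F(t)\mathbf{Q_c}\mathbf F^\mathsf T(t)\,dt$ with the trapezoid rule and compares it against the closed form found above:

```python
import numpy as np

dt, phi_s = 0.1, 1.0            # assumed values for illustration
ts = np.linspace(0, dt, 2001)

def F(t):
    return np.array([[1, t, t**2/2],
                     [0, 1, t],
                     [0, 0, 1]])

Qc = np.diag([0.0, 0.0, phi_s])  # continuous noise on the acceleration term only

# Trapezoid-rule approximation of the integral over [0, dt]
samples = np.array([F(t) @ Qc @ F(t).T for t in ts])
Q_numeric = np.trapz(samples, ts, axis=0)

# Closed form from the SymPy result above
Q_exact = phi_s * np.array([[dt**5/20, dt**4/8, dt**3/6],
                            [dt**4/8,  dt**3/3, dt**2/2],
                            [dt**3/6,  dt**2/2, dt]])
print(np.abs(Q_numeric - Q_exact).max())  # should be very small
```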
For completeness, let us compute the discrete process noise for the 0th order and 1st order models.
F_k = sympy.Matrix([[1]]) Q_c = sympy.Matrix([[phi]]) print('0th order discrete process noise') sympy.integrate(F_k*Q_c*F_k.T,(dt, 0, dt)) F_k = sympy.Matrix([[1, dt], [0, 1]]) Q_c = sympy.Matrix([[0, 0], [0, 1]])*phi Q = sympy.integrate(F_k * Q_c * F_k.T, (dt, 0, dt)) print('1st order discrete process noise') # factor phi out of the matrix to make it more readable Q = Q / phi sympy.MatMul(Q, phi)
1st order discrete process noise
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Piecewise White Noise Model Another model for the noise assumes that the highest order term (say, acceleration) is constant for the duration of each time period, but differs for each time period, and each of these is uncorrelated between time periods. In other words there is a discontinuous jump in acceleration at each time step. This is subtly different from the model above, where we assumed that the last term had a continuously varying noisy signal applied to it. We will model this as$$f(x)=Fx+\Gamma w$$where $\Gamma$ is the *noise gain* of the system, and $w$ is the constant piecewise acceleration (or velocity, or jerk, etc). Let's start by looking at a first order system. In this case we have the state transition function$$\mathbf{F} = \begin{bmatrix}1&\Delta t \\ 0& 1\end{bmatrix}$$In one time period, the change in velocity will be $w(t)\Delta t$, and the change in position will be $w(t)\Delta t^2/2$, giving us$$\Gamma = \begin{bmatrix}\frac{1}{2}\Delta t^2 \\ \Delta t\end{bmatrix}$$The covariance of the process noise is then$$Q = \mathbb E[\Gamma w(t) w(t) \Gamma^\mathsf{T}] = \Gamma\sigma^2_v\Gamma^\mathsf{T}.$$We can compute that with SymPy as follows
var=symbols('sigma^2_v') v = Matrix([[dt**2 / 2], [dt]]) Q = v * var * v.T # factor variance out of the matrix to make it more readable Q = Q / var sympy.MatMul(Q, var)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
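The same outer product can of course be evaluated numerically. This small sketch (with made-up values for $\Delta t$ and $\sigma_v^2$) builds $\Gamma\sigma_v^2\Gamma^\mathsf T$ for the first order model directly with NumPy:

```python
import numpy as np

dt, sigma_v2 = 0.1, 2.0           # assumed values for illustration
gamma = np.array([[0.5 * dt**2],
                  [dt]])
Q = sigma_v2 * gamma @ gamma.T    # [[dt^4/4, dt^3/2], [dt^3/2, dt^2]] * sigma_v^2
print(Q)
```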
The second order system proceeds with the same math.$$\mathbf{F} = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$Here we will assume that the white noise is a discrete time Wiener process. This gives us$$\Gamma = \begin{bmatrix}\frac{1}{2}\Delta t^2 \\ \Delta t\\ 1\end{bmatrix}$$There is no 'truth' to this model; it is just convenient and provides good results. For example, we could assume that the noise is applied to the jerk at the cost of a more complicated equation. The covariance of the process noise is then$$Q = \mathbb E[\Gamma w(t) w(t) \Gamma^\mathsf{T}] = \Gamma\sigma^2_v\Gamma^\mathsf{T}.$$We can compute that with SymPy as follows
var=symbols('sigma^2_v') v = Matrix([[dt**2 / 2], [dt], [1]]) Q = v * var * v.T # factor variance out of the matrix to make it more readable Q = Q / var sympy.MatMul(Q, var)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
We cannot say that this model is more or less correct than the continuous model - both are approximations to what is happening to the actual object. Only experience and experiments can guide you to the appropriate model. In practice you will usually find that either model provides reasonable results, but typically one will perform better than the other.The advantage of the second model is that we can model the noise in terms of $\sigma^2$ which we can describe in terms of the motion and the amount of error we expect. The first model requires us to specify the spectral density, which is not very intuitive, but it handles varying time samples much more easily since the noise is integrated across the time period. However, these are not fixed rules - use whichever model (or a model of your own devising) based on testing how the filter performs and/or your knowledge of the behavior of the physical model.A good rule of thumb is to set $\sigma$ somewhere from $\frac{1}{2}\Delta a$ to $\Delta a$, where $\Delta a$ is the maximum amount that the acceleration will change between sample periods. In practice we pick a number, run simulations on data, and choose a value that works well. Using FilterPy to Compute QFilterPy offers several routines to compute the $\mathbf Q$ matrix. The function `Q_continuous_white_noise()` computes $\mathbf Q$ for a given value for $\Delta t$ and the spectral density.
from filterpy.common import Q_continuous_white_noise from filterpy.common import Q_discrete_white_noise Q = Q_continuous_white_noise(dim=2, dt=1, spectral_density=1) print(Q) Q = Q_continuous_white_noise(dim=3, dt=1, spectral_density=1) print(Q)
[[ 0.05 0.125 0.167] [ 0.125 0.333 0.5] [ 0.167 0.5 1.0]]
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
The function `Q_discrete_white_noise()` computes $\mathbf Q$ assuming a piecewise model for the noise.
Q = Q_discrete_white_noise(2, var=1.) print(Q) Q = Q_discrete_white_noise(3, var=1.) print(Q)
[[ 0.25 0.5 0.5] [ 0.5 1.0 1.0] [ 0.5 1.0 1.0]]
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Simplification of QMany treatments use a much simpler form for $\mathbf Q$, setting it to zero except for a noise term in the lower rightmost element. Is this justified? Well, consider the value of $\mathbf Q$ for a small $\Delta t$
import numpy as np np.set_printoptions(precision=8) Q = Q_continuous_white_noise( dim=3, dt=0.05, spectral_density=1) print(Q) np.set_printoptions(precision=3)
[[ 0.00000002 0.00000078 0.00002083] [ 0.00000078 0.00004167 0.00125 ] [ 0.00002083 0.00125 0.05 ]]
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
We can see that most of the terms are very small. Recall that the only equation using this matrix is$$ \mathbf P=\mathbf{FPF}^\mathsf{T} + \mathbf Q$$If the values for $\mathbf Q$ are small relative to $\mathbf P$ then it will contribute almost nothing to the computation of $\mathbf P$. Setting $\mathbf Q$ to the zero matrix except for the lower right term$$\mathbf Q=\begin{bmatrix}0&0&0\\0&0&0\\0&0&\sigma^2\end{bmatrix}$$while not correct, is often a useful approximation. If you do this you will have to perform quite a few studies to guarantee that your filter works in a variety of situations. When you do, 'lower right term' means the most rapidly changing term for each variable. If the state is $\mathbf x=\begin{bmatrix}x & \dot x & \ddot{x} & y & \dot{y} & \ddot{y}\end{bmatrix}^\mathsf{T}$ then $\mathbf Q$ will be 6x6; the elements for both $\ddot{x}$ and $\ddot{y}$ will have to be set to non-zero in $\mathbf Q$. Numeric Integration of Differential Equations We've been exposed to several numerical techniques to solve linear differential equations. These include state-space methods, the Laplace transform, and van Loan's method. These work well for linear ordinary differential equations (ODEs), but do not work well for nonlinear equations. For example, consider trying to predict the position of a rapidly turning car. Cars maneuver by turning the front wheels. This makes them pivot around their rear axle as it moves forward. Therefore the path will be continuously varying and a linear prediction will necessarily produce an incorrect value. If the change in the system is small enough relative to $\Delta t$ this can often produce adequate results, but that will rarely be the case with the nonlinear Kalman filters we will be studying in subsequent chapters. For these reasons we need to know how to numerically integrate ODEs. This can be a vast topic that requires several books. If you need to explore this topic in depth, *Computational Physics in Python* by Dr. Eric Ayars is excellent, and available for free here: http://phys.csuchico.edu/ayars/312/Handouts/comp-phys-python.pdf However, I will cover a few simple techniques which will work for a majority of the problems you encounter. Euler's Method Let's say we have the initial condition problem of $$\begin{gathered}y' = y, \\ y(0) = 1\end{gathered}$$We happen to know the exact answer is $y=e^t$ because we solved it earlier, but for an arbitrary ODE we will not know the exact solution. In general all we know is the derivative of the equation, which is equal to the slope. We also know the initial value: at $t=0$, $y=1$. If we know these two pieces of information we can predict the value of $y(t=1)$ using the slope at $t=0$ and the value of $y(0)$. I've plotted this below.
import matplotlib.pyplot as plt t = np.linspace(-1, 1, 10) plt.plot(t, np.exp(t)) t = np.linspace(-1, 1, 2) plt.plot(t,t+1, ls='--', c='k');
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
You can see that the slope is very close to the curve at $t=0.1$, but far from it at $t=1$. But let's continue with a step size of 1 for a moment. We can see that at $t=1$ the estimated value of $y$ is 2. Now we can compute the value at $t=2$ by taking the slope of the curve at $t=1$ and adding it to our initial estimate. The slope is computed with $y'=y$, so the slope is 2.
import code.book_plots as book_plots t = np.linspace(-1, 2, 20) plt.plot(t, np.exp(t)) t = np.linspace(0, 1, 2) plt.plot([1, 2, 4], ls='--', c='k') book_plots.set_labels(x='x', y='y');
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Here we see the next estimate for y is 4. The errors are getting large quickly, and you might be unimpressed. But 1 is a very large step size. Let's put this algorithm in code, and verify that it works by using a small step size.
def euler(t, tmax, y, dx, step=1.): ys = [] while t < tmax: y = y + step*dx(t, y) ys.append(y) t +=step return ys def dx(t, y): return y print(euler(0, 1, 1, dx, step=1.)[-1]) print(euler(0, 2, 1, dx, step=1.)[-1])
2.0 4.0
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
This looks correct. So now let's plot the result of a much smaller step size.
ys = euler(0, 4, 1, dx, step=0.00001) plt.subplot(1,2,1) plt.title('Computed') plt.plot(np.linspace(0, 4, len(ys)),ys) plt.subplot(1,2,2) t = np.linspace(0, 4, 20) plt.title('Exact') plt.plot(t, np.exp(t)); print('exact answer=', np.exp(4)) print('euler answer=', ys[-1]) print('difference =', np.exp(4) - ys[-1]) print('iterations =', len(ys))
exact answer= 54.5981500331 euler answer= 54.59705808834125 difference = 0.00109194480299 iterations = 400000
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Here we see that the error is reasonably small, but it took a very large number of iterations to get three digits of precision. In practice Euler's method is too slow for most problems, and we use more sophisticated methods. Before we go on, let's formally derive Euler's method, as it is the basis for the more advanced Runge Kutta methods used in the next section. In fact, Euler's method is the simplest form of Runge Kutta. Here are the first few terms of the Taylor expansion of $y$. An infinite expansion would give an exact answer, so $O(h^4)$ denotes the error due to the finite expansion.$$y(t_0 + h) = y(t_0) + h y'(t_0) + \frac{1}{2!}h^2 y''(t_0) + \frac{1}{3!}h^3 y'''(t_0) + O(h^4)$$Here we can see that Euler's method is using the first two terms of the Taylor expansion. Each subsequent term is smaller than the previous terms, so we are assured that the estimate will not be too far off from the correct value. Runge Kutta Methods Runge Kutta is the workhorse of numerical integration. There are a vast number of methods in the literature. In practice, using the Runge Kutta algorithm that I present here will solve most any problem you will face. It offers a very good balance of speed, precision, and stability, and it is the 'go to' numerical integration method unless you have a very good reason to choose something different. Let's dive in. We start with some differential equation$$\ddot{y} = \frac{d}{dt}\dot{y}.$$We can substitute the derivative of $y$ with a function $f$, like so$$\ddot{y} = \frac{d}{dt}f(y,t).$$Deriving these equations is outside the scope of this book, but the Runge Kutta RK4 method is defined with these equations.$$y(t+\Delta t) = y(t) + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + O(\Delta t^4)$$$$\begin{aligned}k_1 &= f(y,t)\Delta t \\k_2 &= f(y+\frac{1}{2}k_1, t+\frac{1}{2}\Delta t)\Delta t \\k_3 &= f(y+\frac{1}{2}k_2, t+\frac{1}{2}\Delta t)\Delta t \\k_4 &= f(y+k_3, t+\Delta t)\Delta t\end{aligned}$$Here is the corresponding code:
def runge_kutta4(y, x, dx, f): """computes 4th order Runge-Kutta for dy/dx. y is the initial value for y x is the initial value for x dx is the difference in x (e.g. the time step) f is a callable function (y, x) that you supply to compute dy/dx for the specified values. """ k1 = dx * f(y, x) k2 = dx * f(y + 0.5*k1, x + 0.5*dx) k3 = dx * f(y + 0.5*k2, x + 0.5*dx) k4 = dx * f(y + k3, x + dx) return y + (k1 + 2*k2 + 2*k3 + k4) / 6.
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Let's use this for a simple example. Let$$\dot{y} = t\sqrt{y(t)}$$with the initial values$$\begin{aligned}t_0 &= 0\\y_0 &= y(t_0) = 1\end{aligned}$$
import math import numpy as np t = 0. y = 1. dt = .1 ys, ts = [], [] def func(y,t): return t*math.sqrt(y) while t <= 10: y = runge_kutta4(y, t, dt, func) t += dt ys.append(y) ts.append(t) exact = [(t**2 + 4)**2 / 16. for t in ts] plt.plot(ts, ys) plt.plot(ts, exact) error = np.array(exact) - np.array(ys) print("max error {}".format(max(error)))
max error 5.206970035942504e-05
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Complex Numbers Q1. Return the angle of `a` in radians.
a = 1+1j output = ... print(output)
0.785398163397
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
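One possible solution sketch for Q1, using `np.angle` (assuming `np` is NumPy imported in the usual way):

```python
import numpy as np

a = 1 + 1j
output = np.angle(a)   # angle in radians
print(output)          # 0.7853981633974483
```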
Q2. Return the real part and imaginary part of `a`.
a = np.array([1+2j, 3+4j, 5+6j]) real = ... imag = ... print("real part=", real) print("imaginary part=", imag)
real part= [ 1. 3. 5.] imaginary part= [ 2. 4. 6.]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
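A possible solution for Q2 reads the `real` and `imag` attributes of the array:

```python
import numpy as np

a = np.array([1+2j, 3+4j, 5+6j])
real = a.real
imag = a.imag
print("real part=", real)          # [1. 3. 5.]
print("imaginary part=", imag)     # [2. 4. 6.]
```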
Q3. Replace the real part of a with `9`, the imaginary part with `[5, 7, 9]`.
a = np.array([1+2j, 3+4j, 5+6j]) ... ... print(a)
[ 9.+5.j 9.+7.j 9.+9.j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
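One way to solve Q3 is to assign to the `real` and `imag` attributes of the complex array in place:

```python
import numpy as np

a = np.array([1+2j, 3+4j, 5+6j])
a.real = 9            # broadcast 9 into every real part
a.imag = [5, 7, 9]    # element-wise imaginary parts
print(a)              # [9.+5.j 9.+7.j 9.+9.j]
```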
Q4. Return the complex conjugate of `a`.
a = 1+2j output = ... print(output)
(1-2j)
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
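A possible solution for Q4 uses `np.conjugate`:

```python
import numpy as np

a = 1 + 2j
output = np.conjugate(a)
print(output)   # (1-2j)
```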
Discrete Fourier Transform Q5. Compute the one-dimensional DFT of `a`.
a = np.exp(2j * np.pi * np.arange(8)) output = ... print(output)
[ 8.00000000e+00 -6.85802208e-15j 2.36524713e-15 +9.79717439e-16j 9.79717439e-16 +9.79717439e-16j 4.05812251e-16 +9.79717439e-16j 0.00000000e+00 +9.79717439e-16j -4.05812251e-16 +9.79717439e-16j -9.79717439e-16 +9.79717439e-16j -2.36524713e-15 +9.79717439e-16j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
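One possible solution for Q5 uses `np.fft.fft`; since $e^{2\pi i n} = 1$ for integer $n$, all of the energy lands in the zero-frequency bin:

```python
import numpy as np

a = np.exp(2j * np.pi * np.arange(8))
output = np.fft.fft(a)
print(output)   # first element is ~8, the rest are numerically ~0
```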
Q6. Compute the one-dimensional inverse DFT of the `output` in the above question.
print("a=", a) inversed = ... print("inversed=", a)
a= [ 1. +0.00000000e+00j 1. -2.44929360e-16j 1. -4.89858720e-16j 1. -7.34788079e-16j 1. -9.79717439e-16j 1. -1.22464680e-15j 1. -1.46957616e-15j 1. -1.71450552e-15j] inversed= [ 1. +0.00000000e+00j 1. -2.44929360e-16j 1. -4.89858720e-16j 1. -7.34788079e-16j 1. -9.79717439e-16j 1. -1.22464680e-15j 1. -1.46957616e-15j 1. -1.71450552e-15j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
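A possible solution for Q6 inverts the transform with `np.fft.ifft`:

```python
import numpy as np

a = np.exp(2j * np.pi * np.arange(8))
output = np.fft.fft(a)
inversed = np.fft.ifft(output)   # recovers a up to floating point error
print("inversed=", inversed)
```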
Q7. Compute the one-dimensional discrete Fourier Transform for real input `a`.
a = [0, 1, 0, 0] output = ... print(output) assert output.size==len(a)//2+1 if len(a)%2==0 else (len(a)+1)//2 # cf. output2 = np.fft.fft(a) print(output2)
[ 1.+0.j 0.-1.j -1.+0.j] [ 1.+0.j 0.-1.j -1.+0.j 0.+1.j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
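One possible solution for Q7 uses the real-input transform `np.fft.rfft`, which returns only the non-negative frequency terms (hence the size check in the exercise):

```python
import numpy as np

a = [0, 1, 0, 0]
output = np.fft.rfft(a)
print(output)   # [ 1.+0.j  0.-1.j -1.+0.j]
```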
Q8. Compute the one-dimensional inverse DFT of the output in the above question.
inversed = ... print("inversed=", a)
inversed= [0, 1, 0, 0]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
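A possible solution for Q8 undoes the real-input transform with `np.fft.irfft`:

```python
import numpy as np

a = [0, 1, 0, 0]
output = np.fft.rfft(a)
inversed = np.fft.irfft(output)   # real array close to the original a
print("inversed=", inversed)
```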
Q9. Return the DFT sample frequencies of `a`.
signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32) fourier = np.fft.fft(signal) n = signal.size freq = ... print(freq)
[ 0. 0.125 0.25 0.375 -0.5 -0.375 -0.25 -0.125]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
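One possible solution for Q9 uses `np.fft.fftfreq`:

```python
import numpy as np

signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32)
n = signal.size
freq = np.fft.fftfreq(n)   # sample frequencies for an n-point DFT
print(freq)   # [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
```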
Window Functions
fig = plt.figure(figsize=(19, 10)) # Hamming window window = np.hamming(51) plt.plot(np.bartlett(51), label="Bartlett window") plt.plot(np.blackman(51), label="Blackman window") plt.plot(np.hamming(51), label="Hamming window") plt.plot(np.hanning(51), label="Hanning window") plt.plot(np.kaiser(51, 14), label="Kaiser window") plt.xlabel("sample") plt.ylabel("amplitude") plt.legend() plt.grid() plt.show()
_____no_output_____
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Multiple linear regression In many data sets there may be several predictor variables that have an effect on a response variable. In fact, the *interaction* between variables may also be used to predict response. When we incorporate these additional predictor variables into the analysis the model is called *multiple regression*. The multiple regression model builds on the simple linear regression model by adding additional predictors with corresponding parameters. Multiple Regression Model Let's suppose we are interested in determining what factors might influence a baby's birth weight. In our data set we have information on birth weight, our response, and predictors: mother's age, weight, and height, and gestation period. A *main effects model* includes each of the possible predictors but no interactions. Suppose we name these features as in the chart below. | Variable | Description ||----------|:-------------------|| BW | baby birth weight || MA | mother's age || MW | mother's weight || MH | mother's height || GP | gestation period | Then the theoretical main effects multiple regression model is $$BW = \beta_0 + \beta_1 MA + \beta_2 MW + \beta_3 MH + \beta_4 GP+ \epsilon.$$ Now we have five parameters to estimate from the data, $\beta_0, \beta_1, \beta_2, \beta_3$ and $\beta_4$. The random error term, $\epsilon$, has the same interpretation as in simple linear regression and is assumed to come from a normal distribution with mean equal to zero and variance equal to $\sigma^2$. Note that multiple regression also includes the polynomial models discussed in the simple linear regression notebook. One of the most important things to notice about the equation above is that each variable makes a contribution **independently** of the other variables. This is sometimes called **additivity**: the effects of the predictor variables are added together to get the total effect on `BW`. Interaction Effects Suppose in the example, through exploratory data analysis, we discover that younger mothers with long gestational times tend to have heavier babies, but older mothers with short gestational times tend to have lighter babies. This could indicate an interaction effect on the response. When there is an interaction effect, the effects of the variables involved are not additive. Different numbers of variables can be involved in an interaction. When two features are involved in the interaction it is called a *two-way interaction*. There are three-way and higher interactions possible as well, but they are less common in practice. The *full model* includes main effects and all interactions. For the example given here there are 6 two-way interactions possible between the variables, 4 possible three-way, and 1 four-way interaction in the full model. Often in practice we fit the full model to check for significant interaction effects. If there are no interactions that are significantly different from zero, we can drop the interaction terms and fit the main effects model to see which of those effects are significant. If interaction effects are significant (important in predicting the behavior of the response) then we will interpret the effects of the model in terms of the interaction. Feature Selection Suppose we run a full model for the four variables in our example and none of the interaction terms are significant. We then run a main effects model and we get parameter estimates as shown in the table below.
| Coefficients | Estimate | Std. Error | p-value ||--------------|----------|------------|---------|| Intercept | 36.69 | 5.97 | 1.44e-6 || MA | 0.36 | 1.00 | 0.7197 || MW | 3.02 | 0.85 | 0.0014 || MH | -0.02 | 0.01 | 0.1792 || GP | -0.81 | 0.66 | 0.2311 | Recall that the p-value is the probability of getting the estimate that we got from the data or something more extreme (further from zero). Small p-values (typically less than 0.05) indicate the associated parameter is different from zero, implying that the associated covariate is important to predict response. In our birth weight example, we see the p-value for the intercept is very low $1.44 \times 10^{-6}$ and so the intercept is not at zero. The mother's weight (MW) has p-value 0.0014 which is very small, indicating that mother's weight has an important (significant) impact on her baby's birth weight. The p-values from all the other Wald tests are large: 0.7197, 0.1792, and 0.2311, so we know none of these variables are important when predicting the birth weight. We can modify the coefficient of determination to account for having more than one predictor in the model, called the *adjusted R-square*. R-square has the property that as you add more terms, it will always increase. The adjustment for more terms takes this into consideration. For this data the adjusted R-square is 0.8208, indicating a reasonably good fit. Different combinations of the variables included in the model may give better or worse fits to the data. We can use several methods to select the "best" model for the data. One example is called *forward selection*. This method begins with an empty model (intercept only) and adds variables to the model one by one until the full main effects model is reached. In each forward step, you add the one variable that gives the best improvement to the fit. There is also *backward selection* where you start with the full model and then drop the least important variables one at a time until you are left with the intercept only. If there are not too many features, you can also look at all possible models. Typically these models are compared using the AIC (Akaike information criterion) which measures the relative quality of models. Given a set of models, the preferred model is the one with the minimum AIC value. Previously we talked about splitting the data into training and test sets. In statistics, this is not common, and the models are trained with all the data. This is because statistics is generally more interested in the effect of a particular variable *across the entire dataset* than it is about using that variable to make a prediction about a particular datapoint. Because of this, we typically have concerns about how well linear regression will work with new data, i.e. will it have the same $r^2$ for new data or a lower $r^2$? Both forward and backward selection potentially exacerbate this problem because they tune the model to the data even more closely by removing variables that aren't "important." You should always be very careful with such variable selection methods and their implications for model generalization. Categorical Variables In the birth weight example, there is also information available about the mother's activity level during her pregnancy. Values for this categorical variable are: low, moderate, and high. How can we incorporate these into the model? Since they are not numeric, we have to create numeric *dummy variables* to use in their place.
A dummy variable represents the presence of a level of the categorical variable by a 1 and its absence by a 0. Fortunately, most software packages that do multiple regression do this for us automatically. Often, one of the levels of the categorical variable is considered the "baseline" and the contributions to the response of the other levels are in relation to the baseline. Let's look at the data again. In the table below, the mother's age is dropped and the mother's activity level (MAL) is included. | Coefficients | Estimate | Std. Error | p-value ||--------------|----------|------------|----------|| Intercept | 31.35 | 4.65 | 3.68e-07 || MW | 2.74 | 0.82 | 0.0026 || MH | -0.04 | 0.02 | 0.0420 || GP | 1.11 | 1.03 | 0.2917 || MALmoderate | -2.97 | 1.44 | 0.049 || MALhigh | -1.45 | 2.69 | 0.5946 | For the categorical variable MAL, MAL low has been chosen as the baseline. The other two levels have parameter estimates that we can use to determine which are significantly different from the low level. This makes sense because all mothers will have at least a low activity level, and the two additional dummy variables `MALhigh` and `MALmoderate` just get added on top of that. We can see that the MAL moderate level is significantly different from the low level (p-value < 0.05). The parameter estimate for the moderate level of MAL is -2.97. This can be interpreted as: being in the moderately active group decreases birth weight by 2.97 units compared to babies in the low activity group. We also see that for babies with mothers in the high activity group, their birth weights are not different from birth weights in the low group, since the p-value is not low (0.5946 > 0.05) and so this term does not have a significant effect on the response (birth weight). This example highlights a phenomenon that often happens in multiple regression. When we drop the variable MA (mother's age) from the model and the categorical variable is included, both MW (mother's weight) and MH (mother's height) are important predictors of birth weight (p-values 0.0026 and 0.0420 respectively). This is why it is important to perform some systematic model selection (forward or backward or all possible) to find an optimum set of features. Diagnostics As in the simple linear regression case, we can use the residuals to check the fit of the model. Recall that the residuals are the observed response minus the predicted response. - Plot the residuals against each independent variable to check whether higher order terms are needed - Plot the residuals versus the predicted values to check whether the variance is constant - Plot a qq-plot of the residuals to check for normality Multicollinearity Multicollinearity occurs when two variables or features are linearly related, i.e. they have very strong correlation between them (close to -1 or 1). Practically this means that some of the independent variables are measuring the same thing and are not needed. In the extreme case (close to -1 or 1), the estimates of the parameters of the model cannot be obtained. This is because there is no unique solution for OLS when multicollinearity occurs. As a result, multicollinearity makes conclusions about which features should be used questionable.
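Since multicollinearity is easy to screen for, here is a small, hedged sketch of one common diagnostic: the pairwise correlation matrix of the predictors. The dataframe and column values below are hypothetical, invented only to illustrate the idea (they are not the birth weight data).

```python
import numpy as np
import pandas as pd

# Hypothetical predictor data, only to illustrate the diagnostic
rng = np.random.default_rng(0)
mw = rng.normal(65, 8, 50)
predictors = pd.DataFrame({
    "MW": mw,
    "MH": rng.normal(165, 6, 50),
    "GP": rng.normal(39, 1.5, 50),
    "MW_lbs": mw * 2.2,          # deliberately collinear with MW
})

# Off-diagonal correlations near -1 or 1 flag potential multicollinearity
print(predictors.corr().round(2))
```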
Example: Trees Let's take a look at a dataset we've seen before, `trees`, but with an additional tree type added, `plum`: | Variable | Type | Description ||----------|-------|:-------------------------------------------------------|| Girth | Ratio | Tree diameter (rather than girth, actually) in inches || Height | Ratio | Height in ft || Volume | Ratio | Volume of timber in cubic ft || Type | Nominal | The type of tree, cherry or plum | Much of what we'll do is the same as with simple linear regression, except:- Converting categorical variables into dummy variables- Different multiple predictors- Interactions Load data Start with the imports:- `import pandas as pd`
import pandas as pd #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="Vd-20qkN(WN5nJAUj;?4">pd</variable></variables><block type="importAs" id="ji{aK+A5l`eBa?Q1/|Pf" x="128" y="319"><field name="libraryName">pandas</field><field name="libraryAlias" id="Vd-20qkN(WN5nJAUj;?4">pd</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Load the dataframe: - Create variable `dataframe`- Set it to `with pd do read_csv using "datasets/trees2.csv"`- `dataframe` (to display)
dataframe = pd.read_csv('datasets/trees2.csv') dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable><variable id="Vd-20qkN(WN5nJAUj;?4">pd</variable></variables><block type="variables_set" id="9aUm-oG6/!Z54ivA^qkm" x="2" y="351"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="g.yE$oK%3]$!k91|6U|I"><field name="VAR" id="Vd-20qkN(WN5nJAUj;?4">pd</field><field name="MEMBER">read_csv</field><data>pd:read_csv</data><value name="INPUT"><block type="text" id="fBBU[Z}QCipaz#y=F$!p"><field name="TEXT">datasets/trees2.csv</field></block></value></block></value></block><block type="variables_get" id="pVVu/utZDzpFy(h9Q-+Z" x="6" y="425"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
We know that later on, we'd like to use `Type` as a predictor, so we need to convert it into a dummy variable. However, we'd also like to keep `Type` as a column for our plot labels. There are several ways to do this, but probably the easiest is to save `Type` and then put it back in the dataframe. It will make sense as we go:- Create variable `treeType`- Set it to `dataframe[` list containing `"Type"` `]` (use {dictVariable}[] from LISTS)- `treeType` (to display)
treeType = dataframe[['Type']] treeType #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="hr*VLs~Y+rz.qsB5%AkC">treeType</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="n?M6{W!2xggQx@X7_00@" x="0" y="391"><field name="VAR" id="hr*VLs~Y+rz.qsB5%AkC">treeType</field><value name="VALUE"><block type="indexer" id="3_O9X7-U(%IcMj/dcLIo"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="?V*^3XN6]-U+o1C:Vzq$"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="^a?w!r[mo5(HVwiC0q=4"><field name="TEXT">Type</field></block></value></block></value></block></value></block><block type="variables_get" id="Lvbr[Vv2??Mx*R}-s{,0" x="8" y="470"><field name="VAR" id="hr*VLs~Y+rz.qsB5%AkC">treeType</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
To do the dummy conversion: - Set `dataframe` to `with pd do get_dummies using` a list containing - `dataframe` - freestyle `drop_first=True`- `dataframe` (to display)
dataframe = pd.get_dummies(dataframe, drop_first=True) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable><variable id="Vd-20qkN(WN5nJAUj;?4">pd</variable></variables><block type="variables_set" id="f~Vi_+$-EAjHP]f_eV;K" x="55" y="193"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="|n$+[JUtgfsvt4?c:yr_"><field name="VAR" id="Vd-20qkN(WN5nJAUj;?4">pd</field><field name="MEMBER">get_dummies</field><data>pd:get_dummies</data><value name="INPUT"><block type="lists_create_with" id="?P;X;R^dn$yjWHW=i7u2"><mutation items="2"></mutation><value name="ADD0"><block type="variables_get" id="Bbsj2h*vF?=ou`pb%n59"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="bMU2}K@krqBgj]d/*N%r"><field name="CODE">drop_first=True</field></block></value></block></value></block></value></block><block type="variables_get" id="2cWY4Drg[bFmM~E#v`]o" x="73" y="293"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Notice that `cherry` is now the base level, so `Type_plum` is `0` where `cherry` was before and `1` where `plum` was before. To put `Type` back in, use `assign`:- Set `dataframe` to `with dataframe do assign using` a list containing - freestyle `Type=treeType`- `dataframe` (to display)
dataframe = dataframe.assign(Type=treeType) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="asM(PJ)BfN(o4N+9wUt$" x="-18" y="225"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id=";29VMd-(]?GAtxBc4RYY"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="~HSVpyu|XuF_=bZz[e./"><mutation items="1"></mutation><value name="ADD0"><block type="dummyOutputCodeBlock" id="0yKT_^W!N#JL!5%=T_+J"><field name="CODE">Type=treeType</field></block></value></block></value></block></value></block><block type="variables_get" id="U)2!3yg#Q,f=4ImV=Pl." x="-3" y="288"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is nice - we have our dummy code for modeling but also the nice original label in `Type` so we don't get confused. Explore data Let's start with some *overall* descriptive statistics:- `with dataframe do describe using`
dataframe.describe() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="varDoMethod" id="?LJ([email protected],`==|to" x="8" y="188"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">describe</field><data>dataframe:describe</data></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is nice, but we suspect there might be some differences between cherry trees and plum trees that this doesn't show. We can `describe` each group as well:- Create variable `groups`- Set it to `with dataframe do groupby using "Type"`
groups = dataframe.groupby('Type') #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="0zfUO$}u$G4I(G1e~N#r">groups</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="kr80`.2l6nJi|eO*fce[" x="44" y="230"><field name="VAR" id="0zfUO$}u$G4I(G1e~N#r">groups</field><value name="VALUE"><block type="varDoMethod" id="x-nB@sYwAL|7o-0;9DUU"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">groupby</field><data>dataframe:groupby</data><value name="INPUT"><block type="text" id="Lby0o8dWqy8ta:56K|bn"><field name="TEXT">Type</field></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Now `describe` groups:- `with groups do describe using`
groups.describe() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="0zfUO$}u$G4I(G1e~N#r">groups</variable></variables><block type="varDoMethod" id="]q4DcYnB3HUf/GehIu+T" x="8" y="188"><field name="VAR" id="0zfUO$}u$G4I(G1e~N#r">groups</field><field name="MEMBER">describe</field><data>groups:describe</data></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Notice this results table has been rotated compared to the normal `describe`. The rows are our two tree types, and the columns are **stacked columns** where the header (e.g. `Girth`) applies to everything below it and to the left (it is not centered). From this we see that the `Girth` is about the same across trees, the `Height` is 13 ft different on average, and `Volume` is 5 cubic ft different on average. Let's do a plot. We can sneak all the variables into a 2D scatterplot with some clever annotations. First the import:- `import plotly.express as px`
import plotly.express as px #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable></variables><block type="importAs" id="kPF|afHe60B:rsCmJI2O" x="128" y="178"><field name="libraryName">plotly.express</field><field name="libraryAlias" id="k#w4n=KvP~*sLy*OW|Jl">px</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Create the scatterplot:- Create variable `fig`- Set it to `with px do scatter using` a list containing - `dataframe` - freestyle `x="Height"` - freestyle `y="Volume"` - freestyle `color="Type"` - freestyle `size="Girth"`
fig = px.scatter(dataframe, x="Height", y="Volume", color="Type", size="Girth") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="/1x?=CLW;i70@$T5LPN/" x="48" y="337"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><value name="VALUE"><block type="varDoMethod" id="O07?sQIdula@ap]/9Ogq"><field name="VAR" id="k#w4n=KvP~*sLy*OW|Jl">px</field><field name="MEMBER">scatter</field><data>px:scatter</data><value name="INPUT"><block type="lists_create_with" id="~tHtb;Nbw/OP6#7pB9wX"><mutation items="5"></mutation><value name="ADD0"><block type="variables_get" id="UE)!btph,4mdjsf[F37|"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="~L)yq!Jze#v9R[^p;2{O"><field name="CODE">x="Height"</field></block></value><value name="ADD2"><block type="dummyOutputCodeBlock" id="yu5^$n1zXY3)#RcRx:~;"><field name="CODE">y="Volume"</field></block></value><value name="ADD3"><block type="dummyOutputCodeBlock" id="aCZ,k0LzStF1D(+SB2%A"><field name="CODE">color="Type"</field></block></value><value name="ADD4"><block type="dummyOutputCodeBlock" id="4yv:pfYUrA=V0bO}PLcX"><field name="CODE">size="Girth"</field></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show the figure:- `with fig do show using`
fig.show() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable></variables><block type="varDoMethod" id="SV]QMDs*p(4s=2tPrl4a" x="8" y="188"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><field name="MEMBER">show</field><data>fig:show</data></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Modeling 1 Last time we looked at `trees`, we used `Height` to predict `Volume`. With multiple linear regression, we can use more than one variable. Let's start with using `Girth` and `Height` to predict `Volume`. But first, the imports:- `import sklearn.linear_model as linear_model`- `import numpy as np`
import sklearn.linear_model as linear_model import numpy as np #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="!+Hi;Yx;ZB!EQYU8ItpO">linear_model</variable><variable id="YynR+H75hTgW`vKfMxOx">np</variable></variables><block type="importAs" id="m;0Uju49an!8G3YKn4cP" x="93" y="288"><field name="libraryName">sklearn.linear_model</field><field name="libraryAlias" id="!+Hi;Yx;ZB!EQYU8ItpO">linear_model</field><next><block type="importAs" id="^iL#`T{6G3.Uxfj*r`Cv"><field name="libraryName">numpy</field><field name="libraryAlias" id="YynR+H75hTgW`vKfMxOx">np</field></block></next></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Create the model:- Create variable `lm` (for linear model)- Set it to `with linear_model create LinearRegression using`
lm = linear_model.LinearRegression() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable><variable id="!+Hi;Yx;ZB!EQYU8ItpO">linear_model</variable></variables><block type="variables_set" id="!H`J#y,K:4I.h#,HPeK{" x="127" y="346"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><value name="VALUE"><block type="varCreateObject" id="h:O3ZfE(*c[Hz3sF=$Mm"><field name="VAR" id="!+Hi;Yx;ZB!EQYU8ItpO">linear_model</field><field name="MEMBER">LinearRegression</field><data>linear_model:LinearRegression</data></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Train the model using all the data:- `with lm do fit using` a list containing - `dataframe [ ]` (use {dictVariable} from LISTS) containing a list containing - `"Girth"` (this is $X_1$) - `"Height"` (this is $X_2$) - `dataframe [ ]` containing a list containing - `"Volume"` (this is $Y$)
lm.fit(dataframe[['Girth', 'Height']], dataframe[['Volume']]) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="varDoMethod" id="W6(0}aPsJ;vA9C3A!:G@" x="8" y="188"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">fit</field><data>lm:</data><value name="INPUT"><block type="lists_create_with" id="|pmNlB*$t`wI~M5-Nu5]"><mutation items="2"></mutation><value name="ADD0"><block type="indexer" id=".|%fa!U;=I@;!6$?B7Id"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="o5szXy4*HmKGA;-.~H?H"><mutation items="2"></mutation><value name="ADD0"><block type="text" id="{*5MFGJL4(x-JLsuD9qv"><field name="TEXT">Girth</field></block></value><value name="ADD1"><block type="text" id="#cqoT/|u(kuI^=VOHoB@"><field name="TEXT">Height</field></block></value></block></value></block></value><value name="ADD1"><block type="indexer" id="o.R`*;zvaP%^K2/_t`6*"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="[WAkSKWMcU+j3zS)uzVG"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="w0w/T-Wh/df/waYll,rv"><field name="TEXT">Volume</field></block></value></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Go ahead and get the $r^2$ ; you can just copy the blocks from the last cell and change `fit` to `score`.
lm.score(dataframe[['Girth', 'Height']], dataframe[['Volume']]) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="varDoMethod" id="W6(0}aPsJ;vA9C3A!:G@" x="8" y="188"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">score</field><data>lm:score</data><value name="INPUT"><block type="lists_create_with" id="|pmNlB*$t`wI~M5-Nu5]"><mutation items="2"></mutation><value name="ADD0"><block type="indexer" id=".|%fa!U;=I@;!6$?B7Id"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="o5szXy4*HmKGA;-.~H?H"><mutation items="2"></mutation><value name="ADD0"><block type="text" id="{*5MFGJL4(x-JLsuD9qv"><field name="TEXT">Girth</field></block></value><value name="ADD1"><block type="text" id="#cqoT/|u(kuI^=VOHoB@"><field name="TEXT">Height</field></block></value></block></value></block></value><value name="ADD1"><block type="indexer" id="o.R`*;zvaP%^K2/_t`6*"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="[WAkSKWMcU+j3zS)uzVG"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="w0w/T-Wh/df/waYll,rv"><field name="TEXT">Volume</field></block></value></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
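For reference, `score` here returns the coefficient of determination, which compares the model's squared errors to those of simply predicting the mean of $Y$:

$$r^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$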
Based on that $r^2$, we'd think we have a really good model, right?

## Diagnostics 1

To check the model, the first thing we need to do is get the predictions from the model. Once we have the predictions, we can `assign` them to a column in the `dataframe`:

- Set `dataframe` to `with dataframe do assign using` a list containing
    - freestyle `predictions1=` *followed by*
        - `with lm do predict using` a list containing
            - `dataframe [ ]` containing a list containing
                - `"Girth"`
                - `"Height"`
- `dataframe` (to display)

**This makes a very long block, so you probably want to create all the blocks and then connect them in reverse order.**
dataframe = dataframe.assign(predictions1= (lm.predict(dataframe[['Girth', 'Height']]))) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable></variables><block type="variables_set" id="rn0LHF%t,0JD5-!Ov?-U" x="-21" y="228"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="ou+aFod:USt{s9i+emN}"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="Llv.8Hqls5S/.2ZpnF=D"><mutation items="1"></mutation><value name="ADD0"><block type="valueOutputCodeBlock" id="UFqs+Ox{QF6j*LkUvNvu"><field name="CODE">predictions1=</field><value name="INPUT"><block type="varDoMethod" id="(2l5d}m6K9#ZC6_^/JXe"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">predict</field><data>lm:predict</data><value name="INPUT"><block type="lists_create_with" id="bm@2N5t#Fx`yDxjg~:Nw"><mutation items="1"></mutation><value name="ADD0"><block type="indexer" id="WQaaM]1BPY=1wxWQsv:$"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="Asy|RX,d{QfgBQmjI{@@"><mutation items="2"></mutation><value name="ADD0"><block type="text" id="+5PTgD[9U~pl`q#YlA^!"><field name="TEXT">Girth</field></block></value><value name="ADD1"><block type="text" id="{vo.7:W51MOg?Ef(L-Rn"><field name="TEXT">Height</field></block></value></block></value></block></value></block></value></block></value></block></value></block></value></block></value></block><block type="variables_get" id="+]Ia}Q|FmU.bu*zJ1qHs" x="-13" y="339"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Similarly, we want to add the residuals to `dataframe`:

- Set `dataframe` to `with dataframe do assign using` a list containing
    - freestyle `residuals1=` *followed by* `dataframe [ "Volume" ] - dataframe [ "predictions1" ]`
- `dataframe` (to display)

**Hint: use {dictVariable}[] and the + block from MATH**
dataframe = dataframe.assign(residuals1= (dataframe['Volume'] - dataframe['predictions1'])) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="rn0LHF%t,0JD5-!Ov?-U" x="-28" y="224"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="(2l5d}m6K9#ZC6_^/JXe"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="bm@2N5t#Fx`yDxjg~:Nw"><mutation items="1"></mutation><value name="ADD0"><block type="valueOutputCodeBlock" id="^$QWpb1hPzxWt/?~mZBX"><field name="CODE">residuals1=</field><value name="INPUT"><block type="math_arithmetic" id="=szmSC[EoihfyX_5cH6v"><field name="OP">MINUS</field><value name="A"><shadow type="math_number" id="E[2Ss)z+r1pVe~OSDMne"><field name="NUM">1</field></shadow><block type="indexer" id="WQaaM]1BPY=1wxWQsv:$"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="+5PTgD[9U~pl`q#YlA^!"><field name="TEXT">Volume</field></block></value></block></value><value name="B"><shadow type="math_number" id="Z%,Q(P8VED{wb;Q#^bM4"><field name="NUM">1</field></shadow><block type="indexer" id="b.`x=!iTEC%|-VGV[Hu5"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="g`tk1*Psq~biS1z%3c`q"><field name="TEXT">predictions1</field></block></value></block></value></block></value></block></value></block></value></block></value></block><block type="variables_get" id="+]Ia}Q|FmU.bu*zJ1qHs" x="-13" y="339"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Now let's do some plots!

Let's check linearity and equal variance:

- Linearity means the residuals will be close to zero
- Equal variance means the residuals will be evenly spread around zero
- Set `fig` to `with px do scatter using` a list containing
    - `dataframe`
    - freestyle `x="predictions1"`
    - freestyle `y="residuals1"`
fig = px.scatter(dataframe, x="predictions1", y="residuals1") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="/1x?=CLW;i70@$T5LPN/" x="48" y="337"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><value name="VALUE"><block type="varDoMethod" id="O07?sQIdula@ap]/9Ogq"><field name="VAR" id="k#w4n=KvP~*sLy*OW|Jl">px</field><field name="MEMBER">scatter</field><data>px:scatter</data><value name="INPUT"><block type="lists_create_with" id="~tHtb;Nbw/OP6#7pB9wX"><mutation items="3"></mutation><value name="ADD0"><block type="variables_get" id="UE)!btph,4mdjsf[F37|"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="~L)yq!Jze#v9R[^p;2{O"><field name="CODE">x="predictions1"</field></block></value><value name="ADD2"><block type="dummyOutputCodeBlock" id="yu5^$n1zXY3)#RcRx:~;"><field name="CODE">y="residuals1"</field></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show it:

- `with fig do show using`
fig.show() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable></variables><block type="varDoMethod" id="SV]QMDs*p(4s=2tPrl4a" x="8" y="188"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><field name="MEMBER">show</field><data>fig:show</data></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
We see something very, very wrong here: a "U" shape from left to right.

This means our residuals are positive for low predictions, go negative for mid predictions, and go positive again for high predictions.

The only way this can happen is if something is quadratic (squared) in the phenomenon we're trying to model.

## Modeling 2

Step back for a moment and consider what we are trying to do.
We are trying to predict volume from other measurements of the tree.

What is the formula for volume?

$$V = \pi r^2 h$$

Since this is the mathematical definition, we don't expect any differences for `plum` vs. `cherry`.

What are our variables?

- `Volume`
- `Girth` (diameter, which is twice $r$)
- `Height`

In other words, we basically have everything in the formula.

Let's create a new column that is closer to what we want, `Girth` * `Girth` * `Height`:

- Set `dataframe` to `with dataframe do assign using` a list containing
    - freestyle `GGH=` *followed by* `dataframe [ "Girth" ] * dataframe [ "Girth" ] * dataframe [ "Height" ]`
- `dataframe` (to display)
dataframe = dataframe.assign(GGH= (dataframe['Girth'] * (dataframe['Girth'] * dataframe['Height']))) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="rn0LHF%t,0JD5-!Ov?-U" x="-28" y="224"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="(2l5d}m6K9#ZC6_^/JXe"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="bm@2N5t#Fx`yDxjg~:Nw"><mutation items="1"></mutation><value name="ADD0"><block type="valueOutputCodeBlock" id="^$QWpb1hPzxWt/?~mZBX"><field name="CODE">GGH=</field><value name="INPUT"><block type="math_arithmetic" id="5RK=q#[GZz]1)F{}r5DR"><field name="OP">MULTIPLY</field><value name="A"><shadow type="math_number"><field name="NUM">1</field></shadow><block type="indexer" id="Xh!r5Y0#k:n+aqBjuvad"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="|4#UlYaNe-aeV+s$,Wn]"><field name="TEXT">Girth</field></block></value></block></value><value name="B"><shadow type="math_number" id=";S0XthTRZu#Q.w|qt88k"><field name="NUM">1</field></shadow><block type="math_arithmetic" id="=szmSC[EoihfyX_5cH6v"><field name="OP">MULTIPLY</field><value name="A"><shadow type="math_number" id="E[2Ss)z+r1pVe~OSDMne"><field name="NUM">1</field></shadow><block type="indexer" id="WQaaM]1BPY=1wxWQsv:$"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="+5PTgD[9U~pl`q#YlA^!"><field name="TEXT">Girth</field></block></value></block></value><value name="B"><shadow type="math_number" id="Z%,Q(P8VED{wb;Q#^bM4"><field name="NUM">1</field></shadow><block type="indexer" id="b.`x=!iTEC%|-VGV[Hu5"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="g`tk1*Psq~biS1z%3c`q"><field name="TEXT">Height</field></block></value></block></value></block></value></block></value></block></value></block></value></block></value></block><block type="variables_get" id="+]Ia}Q|FmU.bu*zJ1qHs" x="-13" y="339"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
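As a quick sanity check on the volume reasoning above (a sketch that assumes the `dataframe` columns used in this notebook), the ratio of `Volume` to the new `GGH` column should be roughly constant across trees if $V \approx k \cdot \text{Girth}^2 \cdot \text{Height}$ holds:

```python
# If Volume is roughly k * Girth^2 * Height, this ratio should cluster around one value
k = dataframe['Volume'] / dataframe['GGH']
print(k.describe())  # a small spread around the mean supports fitting on GGH alone
```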
As you might have noticed, `GGH` is an interaction. Often when we have interactions, we also include the variables that the interaction is made of (also known as **main effects**).

However, in this case, that doesn't make sense, because we know the interaction is close to the definition of `Volume`.

So let's fit a new model using just `GGH`, save its predictions and residuals, and plot its predicted vs. residual diagnostic plot.

First, fit the model:

- `with lm do fit using` a list containing
    - `dataframe [ ]` containing a list containing
        - `"GGH"`
    - `dataframe [ ]` containing a list containing
        - `"Volume"`
lm.fit(dataframe[['GGH']], dataframe[['Volume']]) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="varDoMethod" id="W6(0}aPsJ;vA9C3A!:G@" x="8" y="188"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">fit</field><data>lm:</data><value name="INPUT"><block type="lists_create_with" id="|pmNlB*$t`wI~M5-Nu5]"><mutation items="2"></mutation><value name="ADD0"><block type="indexer" id=".|%fa!U;=I@;!6$?B7Id"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="o5szXy4*HmKGA;-.~H?H"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="{*5MFGJL4(x-JLsuD9qv"><field name="TEXT">GGH</field></block></value></block></value></block></value><value name="ADD1"><block type="indexer" id="o.R`*;zvaP%^K2/_t`6*"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="[WAkSKWMcU+j3zS)uzVG"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="w0w/T-Wh/df/waYll,rv"><field name="TEXT">Volume</field></block></value></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
## Diagnostics 2

Save the predictions:

- Set `dataframe` to `with dataframe do assign using` a list containing
    - freestyle `predictions2=` *followed by*
        - `with lm do predict using` a list containing
            - `dataframe [ ]` containing a list containing
                - `"GGH"`
- `dataframe` (to display)
dataframe = dataframe.assign(predictions2= (lm.predict(dataframe[['GGH']]))) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable></variables><block type="variables_set" id="rn0LHF%t,0JD5-!Ov?-U" x="-21" y="228"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="ou+aFod:USt{s9i+emN}"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="Llv.8Hqls5S/.2ZpnF=D"><mutation items="1"></mutation><value name="ADD0"><block type="valueOutputCodeBlock" id="UFqs+Ox{QF6j*LkUvNvu"><field name="CODE">predictions2=</field><value name="INPUT"><block type="varDoMethod" id="(2l5d}m6K9#ZC6_^/JXe"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">predict</field><data>lm:predict</data><value name="INPUT"><block type="lists_create_with" id="bm@2N5t#Fx`yDxjg~:Nw"><mutation items="1"></mutation><value name="ADD0"><block type="indexer" id="WQaaM]1BPY=1wxWQsv:$"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="rugUT!#.Lk(@nt!}4hC;"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="4nD6,I;gq.Y.D%v3$kFX"><field name="TEXT">GGH</field></block></value></block></value></block></value></block></value></block></value></block></value></block></value></block></value></block><block type="variables_get" id="+]Ia}Q|FmU.bu*zJ1qHs" x="-13" y="339"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Save the residuals:

- Set `dataframe` to `with dataframe do assign using` a list containing
    - freestyle `residuals2=` *followed by* `dataframe [ "Volume" ] - dataframe [ "predictions2" ]`
- `dataframe` (to display)
dataframe = dataframe.assign(residuals2= (dataframe['Volume'] - dataframe['predictions2'])) dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="rn0LHF%t,0JD5-!Ov?-U" x="-28" y="224"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="VALUE"><block type="varDoMethod" id="(2l5d}m6K9#ZC6_^/JXe"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><field name="MEMBER">assign</field><data>dataframe:assign</data><value name="INPUT"><block type="lists_create_with" id="bm@2N5t#Fx`yDxjg~:Nw"><mutation items="1"></mutation><value name="ADD0"><block type="valueOutputCodeBlock" id="^$QWpb1hPzxWt/?~mZBX"><field name="CODE">residuals2=</field><value name="INPUT"><block type="math_arithmetic" id="=szmSC[EoihfyX_5cH6v"><field name="OP">MINUS</field><value name="A"><shadow type="math_number" id="E[2Ss)z+r1pVe~OSDMne"><field name="NUM">1</field></shadow><block type="indexer" id="WQaaM]1BPY=1wxWQsv:$"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="+5PTgD[9U~pl`q#YlA^!"><field name="TEXT">Volume</field></block></value></block></value><value name="B"><shadow type="math_number" id="Z%,Q(P8VED{wb;Q#^bM4"><field name="NUM">1</field></shadow><block type="indexer" id="b.`x=!iTEC%|-VGV[Hu5"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="text" id="g`tk1*Psq~biS1z%3c`q"><field name="TEXT">predictions2</field></block></value></block></value></block></value></block></value></block></value></block></value></block><block type="variables_get" id="+]Ia}Q|FmU.bu*zJ1qHs" x="-13" y="339"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And now plot the predicted vs. residuals to check linearity and equal variance:

- Set `fig` to `with px do scatter using` a list containing
    - `dataframe`
    - freestyle `x="predictions2"`
    - freestyle `y="residuals2"`
fig = px.scatter(dataframe, x="predictions2", y="residuals2") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_set" id="/1x?=CLW;i70@$T5LPN/" x="48" y="337"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><value name="VALUE"><block type="varDoMethod" id="O07?sQIdula@ap]/9Ogq"><field name="VAR" id="k#w4n=KvP~*sLy*OW|Jl">px</field><field name="MEMBER">scatter</field><data>px:scatter</data><value name="INPUT"><block type="lists_create_with" id="~tHtb;Nbw/OP6#7pB9wX"><mutation items="3"></mutation><value name="ADD0"><block type="variables_get" id="UE)!btph,4mdjsf[F37|"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="~L)yq!Jze#v9R[^p;2{O"><field name="CODE">x="predictions2"</field></block></value><value name="ADD2"><block type="dummyOutputCodeBlock" id="yu5^$n1zXY3)#RcRx:~;"><field name="CODE">y="residuals2"</field></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show it:

- `with fig do show using`
fig.show() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable></variables><block type="varDoMethod" id="SV]QMDs*p(4s=2tPrl4a" x="8" y="188"><field name="VAR" id="w|!1_/S4wRKF4S1`6Xg+">fig</field><field name="MEMBER">show</field><data>fig:show</data></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is a pretty good plot.

Most of the residuals are close to zero, and those that aren't are fairly evenly spread.

We want to see an evenly spaced band above and below 0 as we scan from left to right, and we do.

With this new model, calculate $r^2$:
lm.score(dataframe[['GGH']], dataframe[['Volume']]) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="F]q147x/*m|PMfPQU-lZ">lm</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="varDoMethod" id="W6(0}aPsJ;vA9C3A!:G@" x="8" y="188"><field name="VAR" id="F]q147x/*m|PMfPQU-lZ">lm</field><field name="MEMBER">score</field><data>lm:score</data><value name="INPUT"><block type="lists_create_with" id="|pmNlB*$t`wI~M5-Nu5]"><mutation items="2"></mutation><value name="ADD0"><block type="indexer" id=".|%fa!U;=I@;!6$?B7Id"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="o5szXy4*HmKGA;-.~H?H"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="{*5MFGJL4(x-JLsuD9qv"><field name="TEXT">GGH</field></block></value></block></value></block></value><value name="ADD1"><block type="indexer" id="o.R`*;zvaP%^K2/_t`6*"><field name="VAR" id="B5p-Xul6IZ.0%nd96oa%">dataframe</field><value name="INDEX"><block type="lists_create_with" id="[WAkSKWMcU+j3zS)uzVG"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="w0w/T-Wh/df/waYll,rv"><field name="TEXT">Volume</field></block></value></block></value></block></value></block></value></block></xml>
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
## Debugging the Gradient Computation
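The numerical check implemented by `dJ_debug` below uses the central-difference approximation of each partial derivative, where $e_i$ is the $i$-th unit vector and $\epsilon$ is a small step (0.01 in the code):

$$\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta + \epsilon e_i) - J(\theta - \epsilon e_i)}{2\epsilon}$$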
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(666)
X = np.random.random(size=(1000, 10))
true_theta = np.arange(1, 12, dtype=float)
X_b = np.hstack([np.ones((len(X), 1)), X])
y = X_b.dot(true_theta) + np.random.normal(size=1000)

true_theta
X.shape
y.shape

def J(theta, X_b, y):
    try:
        return np.sum((y - X_b.dot(theta))**2) / len(X_b)
    except:
        return float('inf')

def dJ_math(theta, X_b, y):
    return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)

def dJ_debug(theta, X_b, y, epsilon=0.01):
    res = np.empty(len(theta))
    for i in range(len(theta)):
        theta_1 = theta.copy()
        theta_1[i] += epsilon
        theta_2 = theta.copy()
        theta_2[i] -= epsilon
        res[i] = (J(theta_1, X_b, y) - J(theta_2, X_b, y)) / (2 * epsilon)
    return res

def gradient_descent(dJ, X_b, y, initial_theta, eta, n_iters=1e4, epsilon=1e-8):
    theta = initial_theta
    cur_iter = 0
    while cur_iter < n_iters:
        gradient = dJ(theta, X_b, y)
        last_theta = theta
        theta = theta - eta * gradient
        if abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
            break
        cur_iter += 1
    return theta

X_b = np.hstack([np.ones((len(X), 1)), X])
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01

%time theta = gradient_descent(dJ_debug, X_b, y, initial_theta, eta)
theta

%time theta = gradient_descent(dJ_math, X_b, y, initial_theta, eta)
theta
CPU times: user 1.57 s, sys: 30.6 ms, total: 1.6 s Wall time: 856 ms
Apache-2.0
06-Gradient-Descent/08-Debug-Gradient/08-Debug-Gradient.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
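As an extra check (a sketch using the functions defined above), the two gradient implementations can also be compared directly at a random point rather than only through the final fitted `theta`:

```python
# Compare the analytic gradient with the central-difference approximation at a random theta
theta_test = np.random.random(X_b.shape[1])
print(np.allclose(dJ_math(theta_test, X_b, y), dJ_debug(theta_test, X_b, y)))  # expect True
```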
# Inaugural Project

> **Note the following:**
> 1. This is an example of how to structure your **inaugural project**.
> 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging).
> 1. Remember this [guide](https://www.markdownguide.org/basic-syntax/) on markdown and (a bit of) latex.
> 1. Turn on automatic numbering by clicking on the small icon on top of the table of contents in the left sidebar.
> 1. The `inauguralproject.py` file includes a function which can be used multiple times in this notebook.

Imports and set magics:
import numpy as np

# autoreload modules when code is run
%load_ext autoreload
%autoreload 2

# local modules
import inauguralproject
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Question 1

BRIEFLY EXPLAIN HOW YOU SOLVE THE MODEL.
# code for solving the model (remember documentation and comments)
a = np.array([1, 2, 3])
b = inauguralproject.square(a)
print(b)
[1 4 9]
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Question 2

ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Question 3

ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Question 4

ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Question 5

ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
# Generative Adversarial Network

In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!

GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:

* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN](https://github.com/junyanz/CycleGAN)
* [A whole list](https://github.com/wiseodd/generative-models)

The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

![GAN diagram](assets/gan_diagram.png)

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.

The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
%matplotlib inline

import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Extracting MNIST_data\train-images-idx3-ubyte.gz Extracting MNIST_data\train-labels-idx1-ubyte.gz Extracting MNIST_data\t10k-images-idx3-ubyte.gz Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
## Model Inputs

First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.

>**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name="discriminator_inputs")
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name="generator_inputs")

    return inputs_real, inputs_z
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
## Generator network

![GAN Network](assets/gan_network.png)

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.

### Variable Scope

Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.

We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.

To use `tf.variable_scope`, you use a `with` statement:

```python
with tf.variable_scope('scope_name', reuse=False):
    # code here
```

Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.

### Leaky ReLU

TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:

$$f(x) = max(\alpha * x, x)$$

### Tanh Output

The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.

>**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out: tanh output of the generator
    '''
    with tf.variable_scope("generator", reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)

        return out
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
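As a quick numeric illustration of the leaky ReLU used above, here is a small NumPy sketch that is independent of the TensorFlow graph; the 0.01 leak matches the `alpha` used in this notebook:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(np.maximum(0.01 * x, x))  # negatives are scaled by alpha, non-negatives pass through
```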
## Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.

>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits: sigmoid output and raw logits of the discriminator
    '''
    with tf.variable_scope("discriminator", reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)

        return out, logits
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
## Hyperparameters
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
## Build network

Now we're building the network from the functions defined above.

First is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z.

Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.

Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.

>**Exercise:** Build the network from the functions you defined earlier.
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here; g_model is the generator output
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)

# Discriminator networks here, sharing variables between real and fake inputs
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
## Discriminator and Generator Losses

Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`.

The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Finally, the generator losses are using `d_logits_fake`, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.

>**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
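To see what the smoothed labels do to the real-image loss, here is a small NumPy sketch of the same stable element-wise formula that `tf.nn.sigmoid_cross_entropy_with_logits` computes, `max(x, 0) - x*z + log(1 + exp(-|x|))`, evaluated with and without smoothing; the logit values are made up purely for illustration:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # Numerically stable sigmoid cross-entropy, element-wise
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([3.0, 1.0, -0.5])                       # hypothetical outputs on real images
print(sigmoid_xent(logits, np.ones_like(logits)))         # labels of 1.0
print(sigmoid_xent(logits, np.ones_like(logits) * 0.9))   # smoothed labels of 0.9
```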