title (string, 15–126 chars) | category (3 classes) | posts (list) | answered (bool, 2 classes)
---|---|---|---|
How to resolve the RuntimeError related to torch.cat() function? | null | [
{
"contents": "I have an attention decoder whose forward function is as follows. <SCODE>def forward(self, input, hidden, encoder_outputs):\n embedded = self.embedding(input).view(1, 1, -1)\n embedded = self.drop(embedded)\n attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)))\n<ECODE> When ever the forward function gets called, I get the following error. How to resolve this problem? The problem is associated with torch.cat(). Please help.",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "you can add prints to figure out the problematic tensor shape for torch.cat",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "Thanks for your reply. I printed the shape and found the following. <SCODE>embedded = self.embedding(input).view(1, 1, -1)\nembedded = self.drop(embedded)\nprint(embedded[0].size(), hidden[0].size())\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "Spider101"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Spider101"
},
{
"contents": "Now it makes sense! Thanks.",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
}
] | false |
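Following the shape-debugging advice in the thread above, a minimal sketch of what those prints reveal; the sizes here are hypothetical stand-ins for embedded[0] and hidden[0], since the original error output is not shown. torch.cat along dim=1 requires every other dimension to match.
<SCODE>
import torch

# hypothetical shapes standing in for embedded[0] and hidden[0]
a = torch.randn(1, 256)
b = torch.randn(1, 512)

print(a.size(), b.size())        # inspect both operands before concatenating
out = torch.cat((a, b), dim=1)   # works: only dim 1 differs
print(out.size())                # torch.Size([1, 768])

c = torch.randn(2, 512)          # mismatched batch dimension
# torch.cat((a, c), dim=1)       # raises a RuntimeError: sizes must match except in dim 1
<ECODE>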
Is there any way to get second order derivative or Hessian Matrix? | null | [
{
"contents": "I am trying to get the Hessian Matrix of weights in a convolutional kernel. However, there is no API which can do the job like Tensorflow.",
"isAccepted": false,
"likes": 2,
"poster": "LiuzcEECS"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "AFAIK TensorFlow will return you a Hessian-vector product like most automatic differentiation software.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "Any updates regarding second order derivatives in PyTorch?",
"isAccepted": false,
"likes": 3,
"poster": "Ilya_Kostrikov"
},
{
"contents": "",
"isAccepted": false,
"likes": 6,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "shijie-wu"
},
{
"contents": "Has there been any update on this? That is, how to get the Hessian (even if just the diagonal) in Pytorch? Would something like this work:",
"isAccepted": false,
"likes": null,
"poster": "adrianalbert"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "vabh"
},
{
"contents": "Thanks, will look into that!",
"isAccepted": false,
"likes": null,
"poster": "adrianalbert"
},
{
"contents": "It looks like torch.autograd does not have a “grad” function? I’m using the latest pytorch (0.1.12_2) ImportError: cannot import name grad",
"isAccepted": false,
"likes": null,
"poster": "adrianalbert"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "vabh"
},
{
"contents": "Got it, thanks! I installed Pytorch from source and now I have access to the grad and backward functions in torch.autograd. However when I try to use grad my kernel just crashes. I use like this: Is this not intended to be used with non-scalar (multiple-dimensional) Tensors? The examples on the github work fine. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "adrianalbert"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "vabh"
},
{
"contents": "I see, that makes sense, given that the model is a CNN with skip connections. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "adrianalbert"
},
{
"contents": "Are there any updates on this? Its been some time. Whats wrong with just doing w.grad.backward()?",
"isAccepted": false,
"likes": null,
"poster": "Brando_Miranda"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "fehiepsi"
},
{
"contents": "<SCODE>import torch\nfrom torch import Tensor\nfrom torch.autograd import Variable\nfrom torch.autograd import grad\nfrom torch import nn\n\ntorch.manual_seed(623)\n\nx = Variable(torch.ones(2,1), requires_grad=True)\nA = torch.FloatTensor([[1,2],[3,4]])\n\nprint(A)\nprint(x)\n\nf = x.view(-1) @ A @ x\nprint(f)\n\n\nx_1grad, = grad(f, x, create_graph=True)\nprint(x_1grad)\nprint(A @ x + A.t() @ x)\n\nx_2grad0, = grad(x_1grad[0], x, create_graph=True)\nx_2grad1, = grad(x_1grad[1], x, create_graph=True)\n\nHessian = torch.cat((x_2grad0, x_2grad1), dim=1)\nprint(Hessian)\n\nprint(A + A.t())\n<ECODE>",
"isAccepted": false,
"likes": 4,
"poster": "Albert_Chen"
},
{
"contents": "I’ve worked on this issue for some time, plz feel free to use it below.",
"isAccepted": false,
"likes": null,
"poster": "JIANG_GUOQING"
}
] | false |
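For later PyTorch releases (1.5 and up), a full Hessian can also be computed directly with torch.autograd.functional.hessian; a minimal sketch that reproduces the quadratic-form example above:
<SCODE>
import torch
from torch.autograd.functional import hessian

A = torch.tensor([[1., 2.], [3., 4.]])

def f(x):
    # scalar quadratic form x^T A x
    return x @ A @ x

x = torch.ones(2)
H = hessian(f, x)
print(H)           # equals A + A.t()
print(A + A.t())
<ECODE>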
Gradient calculation | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
}
] | false |
Strangely slow weight loading [fixed] | null | [
{
"contents": "I’m running into a strange issue where loading weights is quite slow. Specifically for the DC-GAN example in the repo, loading the weights for a DC-GAN with 10 latent variables takes 150 seconds which doesn’t seem right given the size of the model. The code to create/load the model is below; is anything obviously wrong here? Thanks! <SCODE>class _netG(nn.Module):\n def __init__(self, ngpu):\n super(_netG, self).__init__()\n self.ngpu = ngpu\n self.main = nn.Sequential(\n # input is Z, going into a convolution\n nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),\n nn.BatchNorm2d(ngf * 8),\n nn.ReLU(True),\n # state size. (ngf*8) x 4 x 4\n nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 4),\n nn.ReLU(True),\n # state size. (ngf*4) x 8 x 8\n nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 2),\n nn.ReLU(True),\n # state size. (ngf*2) x 16 x 16\n nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf),\n nn.ReLU(True),\n # state size. (ngf) x 32 x 32\n nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),\n nn.Tanh()\n # state size. (nc) x 64 x 64\n )\n def forward(self, input): # [omitted for previty]\n\nnetG = _netG(ngpu)\nnetG.apply(weights_init)\nwith rtk.timing.Logger('load weights'):\n if opt.netG != '':\n netG.load_state_dict(torch.load(opt.netG))\nprint(netG)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "tachim"
}
] | false |
Training parameters in nonlinearities | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "yenson-lau"
},
{
"contents": "It all depends how do you want to change it, there’s not a single good recipe. You should be able to implement it as a Python function that operates on Variables, and that will get the gradient computed automatically. Another approach would be to implement your own function. You can find some notes on that in the docs.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
}
] | false |
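A minimal sketch of the first approach suggested above: a nonlinearity with a trainable parameter written as ordinary tensor operations, so autograd provides the gradient automatically. The particular functional form (a swish-like gate) and the initial value are illustrative assumptions, not anything prescribed by the thread.
<SCODE>
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable slope, initialized to 1.0 (arbitrary choice)
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x):
        # x * sigmoid(beta * x); beta receives gradients like any other parameter
        return x * torch.sigmoid(self.beta * x)

act = LearnableActivation()
act(torch.randn(4, 3)).sum().backward()
print(act.beta.grad)
<ECODE>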
What is the difference between view() and unsqueeze()? | null | [
{
"contents": "<SCODE>input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3\nprint(input.unsqueeze(0).size()) # prints - torch.size([1, 2, 4, 3])\n<ECODE> <SCODE>input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3\nprint(input.view(1, -1, -1, -1).size()) # prints - torch.size([1, 2, 4, 3])\n<ECODE> Any help with good explanation would be appreciated!",
"isAccepted": true,
"likes": 29,
"poster": "wasiahmad"
},
{
"contents": "Also, in the latest versions of PyTorch you can add a new axis by indexing with None as: <SCODE>>>> input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3\n>>> print(input[None].size())\ntorch.Size([1, 2, 4, 3])\n>>> print(input[:, None].size())\ntorch.Size([2, 1, 4, 3])\n<ECODE>",
"isAccepted": true,
"likes": 25,
"poster": "pranav"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "ecolss"
},
{
"contents": "Yeah, but it works too. You can repeat a tensor along new dimensions as well.",
"isAccepted": true,
"likes": null,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": true,
"likes": 1,
"poster": "nicoliKim"
},
{
"contents": "",
"isAccepted": true,
"likes": 4,
"poster": "alexis-jacq"
},
{
"contents": "In pytorch 0.4, I get the same result for the input input = torch.Tensor(2, 4, 3) and print(input.unsqueeze(0).size()) It seems to me that the two (unsqueeze, view) change only the representation of a tensor.",
"isAccepted": true,
"likes": 3,
"poster": "Wei"
},
{
"contents": "<SCODE>In [1]: import torch \n\nIn [2]: im = torch.Tensor(40,40) \n\nIn [3]: im.size() \nOut[3]: torch.Size([40, 40])\n\nIn [4]: im.view(1,1,-1).size() \nOut[4]: torch.Size([1, 1, 1600])\n<ECODE>",
"isAccepted": true,
"likes": 1,
"poster": "mimoralea"
},
{
"contents": "(1,12), (2,6), (3,4), (6,2), (12,1)",
"isAccepted": true,
"likes": 3,
"poster": "iacob"
}
] | true |
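A short contrast of the two calls discussed in this thread: unsqueeze inserts a single dimension of size 1 at a chosen position, while view reshapes to any size compatible with the number of elements (and only one dimension may be inferred with -1):
<SCODE>
import torch

x = torch.randn(2, 4, 3)

print(x.unsqueeze(0).size())      # torch.Size([1, 2, 4, 3])
print(x.unsqueeze(2).size())      # torch.Size([2, 4, 1, 3])

print(x.view(1, 2, 4, 3).size())  # same result as unsqueeze(0)
print(x.view(1, -1, 3).size())    # torch.Size([1, 8, 3]); view can merge dims
# x.view(1, -1, -1, -1)           # raises an error: only one dimension can be inferred
<ECODE>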
Support for tensordot | null | [
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "shijie-wu"
},
{
"contents": "<SCODE>import torch\nimport numpy as np\n\nM = torch.randn(3, 3, 4)\nv = torch.randn(2, 2, 3)\n\nout = torch.Tensor(np.tensordot(v.numpy(), M.numpy(), axes=[[2], [0]]))\n<ECODE> Not as concise, but it’s there.",
"isAccepted": false,
"likes": null,
"poster": "neverfox"
},
{
"contents": "Not exactly true, because if you want to backprop through it, you will break the graph…",
"isAccepted": false,
"likes": 4,
"poster": "miguelvr"
},
{
"contents": "Ah, yes, that’s true and that would be important under most circumstances.",
"isAccepted": false,
"likes": null,
"poster": "neverfox"
},
{
"contents": "So, is there any workaround for tensordot in pytorch?",
"isAccepted": false,
"likes": null,
"poster": "ajeetk"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "deanmark"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "tom"
}
] | false |
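In more recent PyTorch releases there is a native torch.tensordot (and torch.einsum) that keeps the contraction inside autograd; a minimal sketch mirroring the NumPy example above:
<SCODE>
import torch

M = torch.randn(3, 3, 4)
v = torch.randn(2, 2, 3, requires_grad=True)

# contract the last axis of v with the first axis of M
out = torch.tensordot(v, M, dims=([2], [0]))
print(out.size())      # torch.Size([2, 2, 3, 4])

out.sum().backward()   # gradients flow back to v, unlike the NumPy detour
print(v.grad.size())
<ECODE>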
Schema for feeding RNN training data | null | [
{
"contents": "<SCODE> t=0 t=1 t=2 t=3 t=4 t=5 ... t=598 t=599\nsample |---------------------|\nsample |---------------------|\nsample |-----------------\n...\nsample ----|\nsample ----------|\n<ECODE> <SCODE> t=0 t=1 t=2 t=3 t=4 t=5 t=6 t=7 ... t=598 t=599\nsample |-----------|\nsample |-----------|\nsample |-------\n...\nsample -----------|\n<ECODE> Thanks",
"isAccepted": false,
"likes": null,
"poster": "d10genes"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "Thanks for the answer. I’m using time series data rather than LM but it sounds like the 2nd schema is still preferred. Is it important that the sequences are trained in order so that the relevant hidden state is reused, or is it common to shuffle them? It seems the hidden state may not be used anyways if they’re fed in parallel batches.",
"isAccepted": false,
"likes": null,
"poster": "d10genes"
},
{
"contents": "For truncated BPTT training, it’s important that batches be processed in order so that hidden states are preserved.",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ecolss"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "jekbradbury"
}
] | false |
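A minimal sketch of the in-order truncated-BPTT pattern described above: the hidden state is carried across consecutive windows of the series but detached, so gradients only flow within the current window. The sizes and the dummy target are illustrative.
<SCODE>
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters())

series = torch.randn(4, 600, 8)   # batch of 4 series, 600 time steps
hidden = None

for start in range(0, 600, 60):   # consecutive 60-step windows, processed in order
    chunk = series[:, start:start + 60]
    out, hidden = rnn(chunk, hidden)
    loss = criterion(out, torch.zeros_like(out))   # dummy target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # keep the state but cut the graph so backprop stays inside the window
    hidden = tuple(h.detach() for h in hidden)
<ECODE>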
How to build my own modules(change cnn network structure)? | null | [
{
"contents": "<SCODE> I want to build a model like that. But I don't know how to use PyTorch basic modules to do that. I plan to train the horizontal parameters(CNN) at first. Then,train the vertical parameters(RNN). But,I don't know how to combine them together and train this.<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "hongyuan"
},
{
"contents": "This model looks like a Stacked Convolutional Recurrent Neural Network. <SCODE>import torch.nn as nn\nfrom torch.autograd import Variable\n\nclass RNN(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super(RNN, self).__init__()\n \n self.hidden_size = hidden_size\n \n self.i2h = nn.Linear(input_size + hidden_size, hidden_size)\n self.i2o = nn.Linear(input_size + hidden_size, output_size)\n self.softmax = nn.LogSoftmax()\n \n def forward(self, input, hidden):\n combined = torch.cat((input, hidden), 1)\n hidden = self.i2h(combined)\n output = self.i2o(combined)\n output = self.softmax(output)\n return output, hidden\n\n def initHidden(self):\n return Variable(torch.zeros(1, self.hidden_size))\n\nn_hidden = 128\nrnn = RNN(n_letters, n_hidden, n_categories)\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "pranav"
},
{
"contents": "Thank you! I’ll try.",
"isAccepted": false,
"likes": null,
"poster": "hongyuan"
}
] | false |
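A minimal sketch of one way to combine the two parts discussed above: a small CNN produces a feature vector per time step and a recurrent layer runs over the sequence of those features. The layer sizes are arbitrary placeholders; to train the CNN first, its parameters can simply be loaded and frozen before training the recurrent part.
<SCODE>
import torch
import torch.nn as nn

class ConvRNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (N*T, 16, 1, 1)
        )
        self.rnn = nn.GRU(16, 32, batch_first=True)
        self.out = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (N, T, 1, H, W)
        n, t = x.size(0), x.size(1)
        feats = self.cnn(x.view(n * t, *x.shape[2:])).view(n, t, 16)
        seq, _ = self.rnn(feats)
        return self.out(seq[:, -1])           # prediction from the last time step

model = ConvRNN()
print(model(torch.randn(2, 5, 1, 28, 28)).size())   # torch.Size([2, 10])
<ECODE>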
Cannot figure out how to resolve AssertionError | null | [
{
"contents": "I’ve been trying to figure out how to resolve this error for a couple of hours to no luck. I’ve combed through the documentation, but couldn’t find anything. Can someone explain to me why the AssertionError is being thrown? Here’s the code: <SCODE>import torch\nimport torch.nn as nn\nimport torchvision.transforms as transforms\nfrom torch.autograd import Variable\n\nnum_epochs = 15\nbatch_size = 500\nlearning_rate = 0.003\n\nclass CNN(nn.Module):\n def __init__(self):\n super(CNN, self).__init__()\n self.layer1 = nn.Sequential(\n nn.Conv2d(1, 16, kernel_size=5),\n nn.BatchNorm2d(16),\n nn.ReLU(),\n nn.MaxPool2d(2))\n\n\tdef forward(self, x):\n out = self.layer1(x)\n return out\n\ncnn = CNN()\n\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate)\n\nimg = torch.Tensor(img)\n#img & labl are numpy ndarrays\nlabl = torch.Tensor(labl)\nimage = Variable(img)\nlabel = Variable(labl)\n\noptimizer.zero_grad()\noutput = cnn(image)\n<ECODE> Here’s the error message: <SCODE>Traceback (most recent call last):\n File \"conv_net.py\", line 84, in <module>\n output = cnn(image)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"conv_net.py\", line 50, in forward\n out = self.layer1(x)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/container.py\", line 64, in forward\n input = module(input)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/conv.py\", line 237, in forward\n self.padding, self.dilation, self.groups)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/functional.py\", line 38, in conv2d\n return f(input, weight, bias) if bias is not None else f(input, weight)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py\", line 35, in forward\n output = self._update_output(input, weight, bias)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py\", line 99, in _update_output\n output = self._thnn('update_output', input, weight, bias)\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py\", line 159, in _thnn\n impl = _thnn_convs[self.thnn_class_name(input)]\n File \"/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py\", line 140, in thnn_class_name\n assert input.dim() == 4 or input.dim() == 5\nAssertionError<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "randy"
},
{
"contents": "You can simply add a fake batch dimension with unsqueeze(): img = torch.Tensor(img).unsqueeze(0)",
"isAccepted": false,
"likes": null,
"poster": "alexis-jacq"
}
] | false |
Customized dataloader distorts images | null | [
{
"contents": "Hi, All. Here is my code: <SCODE> # training set\n transform = transforms.Compose([transforms.Scale(28), transforms.ToTensor(), ])\n trainset = USPS(root='./data', train=True, download=False, transform=transform)\n# split data\n trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=False, num_workers=1)\n\n# kernel code in USPS\ndef load(self):\n \n # process and save as torch files\n print('Processing...')\n data = sio.loadmat(os.path.join(self.root,'usps_train.mat'))\n traindata = torch.from_numpy(data['data'].transpose())\n \n traindata = traindata.view(16,16,-1).permute(2,1,0)\n trainlabel = torch.from_numpy(data['labels'])\n\n data = sio.loadmat(os.path.join(self.root,'usps_test.mat'))\n testdata = torch.from_numpy(data['data'].transpose())\n\n testdata = testdata.view(16,16,-1).permute(2,1,0)\n testlabel = torch.from_numpy(data['labels'])\n training_set = (traindata, trainlabel)\n test_set = (testdata, testlabel)\n with open(os.path.join(self.root, self.training_file), 'wb') as f:\n torch.save(training_set, f)\n with open(os.path.join(self.root, self.test_file), 'wb') as f:\n torch.save(test_set, f)\n\n print('Done!')\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "waitwaitforget"
}
] | false |
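One thing worth checking for the distortion above (an assumption, since the post does not show the output): if the loaded matrix is (N, 256) with one flattened image per row, then view(16, 16, -1) interleaves pixels from different images, whereas view(-1, 16, 16) keeps each row as one image (followed by a per-image transpose if the pixels are stored column-major). A quick comparison with synthetic data standing in for the .mat file:
<SCODE>
import torch

# stand-in for the loaded matrix: n flattened 16x16 images, one per row
n = 5
flat = torch.arange(n * 256).float().view(n, 256)

a = flat.view(16, 16, -1).permute(2, 1, 0)  # pattern used in the post
b = flat.view(-1, 16, 16)                   # one image per row, no mixing

print(a[0, 0, :4])   # tensor([  0.,  80., 160., 240.]) -> pixels from different images
print(b[0, 0, :4])   # tensor([0., 1., 2., 3.])         -> first pixels of image 0
<ECODE>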
Single node multiGPU + multiCPU parallelism? | null | [
{
"contents": "I got the DataParallel() example for multi-GPU working successfully for my model. However, I happen to have a 64-core Xeon Phi CPU, and I can’t stand looking at it sitting idle. How can I saturate the CPU by assigning it some work? In other words, can I split the workload on GPU0, GPU1, and CPU0-255 and sum the gradients on CPU? Thank you!",
"isAccepted": false,
"likes": null,
"poster": "FuriouslyCurious"
},
{
"contents": "I don’t know very much about Xeon Phi, but I believe that Intel has an MKL version that will parallelize BLAS calls over the whole chip, making it work like a single GPU device. I would do some experiments to compare speed for each part of your network, then use model parallelism to put submodules on the device they work best on. Or you could subclass/modify the code for DataParallel to allow the CPU (Phi) to be one of the included devices.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "Thanks James. I had a subclassing script for Keras based on Kuza55’s script: it replicated models to /gpu0, /gpu1, and /cpu0-255. I will look into subclassing data_parallel.py to use both CPU and GPU. What are the device IDs for CPUs inside PyTorch?",
"isAccepted": false,
"likes": null,
"poster": "FuriouslyCurious"
},
{
"contents": "CPUs don’t have device IDs; there’s just one kind of CPU tensor and operations with them are implemented in TH and farmed out to MKL, which would then have its own strategy for parallelizing over the Phi.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
}
] | false |
Getting started: Basic MLP example (my draft)? | null | [
{
"contents": "Hi, I’ve gone through the PyTorch tutorials, and looked at a couple examples, and I’m still having trouble getting started – I’m just trying to make a basic MLP for now. What I have below is my (existing) Keras version, and then an attempt at a PyTorch version, cobbled together from trying to read the docs and posts on this forum…still not finished because I’m not sure I’m doing this right. <SCODE>def make_model(X, n_hidden, weights_file=\"weights.hdf5\"):\n\n if ('keras' == library): \n model = Sequential()\n model.add(Dense(n_hidden, input_shape=(X.shape[1],), activation='relu', init='he_uniform'))\n model.add(Dense(n_hidden,activation='tanh'))\n model.add(Dense(1))\n model.compile(loss='mse', optimizer='adam', lr=0.001) \n elif ('pytorch' == library):\n class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.hidden = nn.Linear(n_hidden, X.shape[1])\n self.hidden2 = nn.Linear(n_hidden, X.shape[1])\n self.out = nn.Linear(1)\n\n def forward(self, x):\n x = F.relu(self.hidden(x))\n x = F.tanh(self.hidden2(x))\n x = self.out(x)\n return x\n\n model = Net()\n else:\n raise ValueError('Invalid library selection')\n return model\n<ECODE> …And then for training, <SCODE> if ('keras' == library):\n model.fit(X_train, Y_train, nb_epoch=1, batch_size=batch_size, verbose=1, validation_data=(X_test, Y_test), \n callbacks =[ProgbarLogger(),ModelCheckpoint(filepath=weights_file, verbose=1, save_best_only=True)])\n elif ('pytorch' == library):\n input = Variable(X_train, requires_grad=True)\n result = model(input)\n result.backward(torch.randn(result.size()))\n<ECODE> …I don’t see where I actually feed my “Y_train” (target, true) data to PyTorch, with which to compute a loss function. And then for predicting… <SCODE> if ('keras' == library):\n Y_pred = model.predict(X_test)\n elif ('pytorch' == library):\n input = Variable(X_train, requires_grad=True)\n Y_pred = model(input).numpy\n # after which I plot Y_pred along with Y_test using matplotlib\n<ECODE> ?",
"isAccepted": false,
"likes": null,
"poster": "drscotthawley"
},
{
"contents": "Try this: <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.hidden = nn.Linear(X.shape[1], n_hidden)\n self.hidden2 = nn.Linear(n_hidden, n_hidden)\n self.out = nn.Linear(n_hidden, 1)\n\n def forward(self, x):\n x = F.relu(self.hidden(x))\n x = F.tanh(self.hidden2(x))\n x = self.out(x)\n return x\n<ECODE> For the target, you need to give the output of your network together with the targets to a loss function and optimize that.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "Thanks Adam! I incorporated what you wrote, and reviewed the examples/mnist/main.py file, and I’m close to a complete code now. The forward part of the model is generating an error, and I’m wondering if you or anyone else can offer a suggestion: <SCODE>x = [torch.cuda.DoubleTensor of size 20x3 (GPU 0)]\n<ECODE> <SCODE>#! /usr/bin/env python\n# Multilayer Perceptron to learn \"f\" in \"y = f(x,p)\", given lots of (x,y) pairs\n# and a set of parameters p=[...] which affect f(x)\n\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.data\nimport torch.optim as optim\nfrom torch.autograd import Variable\n\ndef myfunction(x,p=[1,0]): # function to be learned, with its parameters\n return p[0]*np.sin(100*p[1]*x) # try just a sine wave, with amplitude & frequency\n\n\ndef myfunc_stacked(X):\n Y = []\n for i in range(X.shape[0]):\n x = X[i,0]\n p1 = X[i,1]\n p2 = X[i,2]\n p = [p1,p2]\n Y.append( myfunction(x,p))\n return np.array(Y)\n\n\ndef stack_params(X, p=None): # encapsulates parameters with X\n if p is None:\n p0 = np.random.rand(len(X)) # random values throughout X\n p1 = np.random.rand(len(X)) \n else:\n p0 = np.ones(len(X)) * p[0] # stack copies of params with X\n p1 = np.ones(len(X)) * p[1]\n\n return np.array(list(zip(X,p0,p1)))\n\n\ndef gen_data(n=1000, n_params=2, rand_all=False):\n X = np.linspace(-1.0,1.0,num=n)\n if (not rand_all):\n p = np.random.random(n_params)-0.5\n else:\n p = None\n X = stack_params(X,p)\n Y = myfunc_stacked(X)\n return X, Y, p\n \n\ndef make_model(X, n_hidden):\n\n class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.hidden = nn.Linear(X.shape[1], n_hidden)\n self.hidden2 = nn.Linear(n_hidden, n_hidden)\n self.out = nn.Linear(n_hidden, 1)\n\n def forward(self, x):\n x = F.relu(self.hidden(x))\n x = F.tanh(self.hidden2(x))\n x = self.out(x)\n return x\n\n # Note: \"backward\" is automatically defined by torch.autograd\n \n model = Net()\n\n if torch.cuda.is_available():\n print(\"Using CUDA, number of devices = \",torch.cuda.device_count())\n model.cuda()\n return model\n\n\ndef plot_prediction(X_test, Y_test, Y_pred, epoch, n_epochs, p_test):\n fig=plt.figure()\n plt.clf()\n ax = plt.subplot(1,1,1)\n ax.set_ylim([-1,1])\n plt.title(\"Epoch #\"+str(epoch)+\"/\"+str(n_epochs)+\", p = \"+str(p_test))\n plt.plot(X_test[:,0],Y_test,'b-',label=\"True\")\n plt.plot(X_test[:,0],Y_pred,'r-',label=\"Predicted\")\n plt.legend()\n plt.savefig('progress.png')\n plt.close(fig)\n return\n\n\ndef train(model, epoch, trainloader, criterion, optimizer):\n model.train()\n for batch_idx, (data, target) in enumerate(trainloader):\n if torch.cuda.is_available():\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data), Variable(target)\n optimizer.zero_grad()\n output = model(data)\n loss = criterion(output,target)\n loss.backward()\n optimizer.step()\n if batch_idx % args.log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. 
* batch_idx / len(train_loader), loss.data[0]))\n\n\ndef predict(model,testloader, X_test, Y_test, epoch, n_epochs, p_test):\n model.eval()\n print(\" Plotting....\")\n Y_pred = []\n for data, target in testloader:\n if torch.cuda.is_available():\n data = data.cuda()\n data = Variable(data, volatile=True)\n output = model(data)\n Y_pred.append(output.numpy)\n\n Y_pred = np.array(Y_pred)\n plot_prediction(X_test, Y_test, Y_pred, epoch, n_epochs, p_test)\n\n\ndef main():\n np.random.seed(2)\n\n # parameters for 'size' of run\n n_hidden = 100\n batch_size = 20\n n_train = 10000 \n n_test =1000\n\n print(\"Setting up data\")\n X_train, Y_train, p_train = gen_data(n=n_train, rand_all=True)\n X_test, Y_test, p_test = gen_data(n=n_test)\n\n trainset = torch.utils.data.TensorDataset(torch.from_numpy(X_train),torch.from_numpy(Y_train))\n trainloader = torch.utils.data.DataLoader(trainset, batch_size=20, shuffle=True, num_workers=2)\n testset = torch.utils.data.TensorDataset(torch.from_numpy(X_test),torch.from_numpy(Y_test))\n testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=2)\n\n print(\"Defining model\")\n model = make_model(X_train, n_hidden)\n optimizer = optim.Adam(model.parameters(), lr=0.001)\n criterion = nn.MSELoss()\n\n n_epochs= 10000\n predict_every = 20\n for epoch in range(n_epochs):\n print(\"(Outer) Epoch \",epoch,\" of \",n_epochs,\":\")\n\n train(model, epoch, trainloader, criterion, optimizer)\n\n if (0 == epoch % predict_every):\n predict(model,testloader, X_test, Y_test, epoch, n_epochs, p_test)\n\nif __name__ == '__main__':\n main()<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "drscotthawley"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "Thank you. So that extra float array was the bias! So, doing that conversion of everything to float32, then for some reason the line <SCODE> for batch_idx, (data, target) in enumerate(trainloader):\n<ECODE> The loader is initialized via… <SCODE> trainset = torch.utils.data.TensorDataset(torch.FloatTensor(X_train),torch.FloatTensor(Y_train.astype(float)))\n trainloader = torch.utils.data.DataLoader(trainset, batch_size=20, shuffle=True, num_workers=2)<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "drscotthawley"
},
{
"contents": "Now I can get on to trying to actually solve the intended problem itself. So far this simple network does not learn well!",
"isAccepted": false,
"likes": null,
"poster": "drscotthawley"
}
] | false |
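Pulling this thread together, a minimal end-to-end sketch of the Keras-style fit/predict flow in PyTorch (everything in float32; the shapes, layer sizes, and epoch count are illustrative):
<SCODE>
import torch
import torch.nn as nn

X_train = torch.randn(256, 3)   # float32 inputs
Y_train = torch.randn(256, 1)   # float32 targets

model = nn.Sequential(nn.Linear(3, 100), nn.ReLU(),
                      nn.Linear(100, 100), nn.Tanh(),
                      nn.Linear(100, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(X_train), Y_train)   # this is where the targets enter
    loss.backward()
    optimizer.step()

with torch.no_grad():                           # prediction, no graph needed
    Y_pred = model(X_train).numpy()
<ECODE>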
The best method for module cloning with parameter sharing? | null | [
{
"contents": "Then, what would be the best approach for cloning with parameter sharing? I mean, weights and grads will be automatically shared if we just forward the module with different inputs and dealing with the outputs. However, if there are other non-module variables that I want to share, I am not sure what would be the most elegant way. Thanks!",
"isAccepted": false,
"likes": 2,
"poster": "supakjk"
},
{
"contents": "I’d say it really depends on the use case. I can’t think of any general recipe right now.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "<SCODE>def _get_clones(module, N):\n return ModuleList([copy.deepcopy(module) for i in range(N)])\n<ECODE> I hope somebody could some up with a solution. Thank you very much in advance for your help!",
"isAccepted": false,
"likes": null,
"poster": "f10w"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "ptrblck"
},
{
"contents": "Sorry my statement was confusing (but still technically correct I think). Actually the biases are not used in the convolution layers and thus they all point to the same object. To detect shared parameters, I use the following function from fairseq: <SCODE>def _catalog_shared_params(module, memo=None, prefix=\"\"):\n \"\"\"Taken from https://github.com/pytorch/fairseq/blob/main/fairseq/trainer.py\n \"\"\"\n if memo is None:\n first_call = True\n memo = {}\n else:\n first_call = False\n for name, param in module._parameters.items():\n param_prefix = prefix + (\".\" if prefix else \"\") + name\n if param not in memo:\n memo[param] = []\n memo[param].append(param_prefix)\n for name, m in module._modules.items():\n if m is None:\n continue\n submodule_prefix = prefix + (\".\" if prefix else \"\") + name\n _catalog_shared_params(m, memo, submodule_prefix)\n if first_call:\n return [x for x in memo.values() if len(x) > 1]\n<ECODE> <SCODE>model = torchvision.models.__dict__['resnet50']()\nshared_params = _catalog_shared_params(model)\nprint(f'shared_params:\\n{shared_params}')\n<ECODE> Output: <SCODE>shared_params:\n[['conv1.bias', 'layer1.0.conv1.bias', 'layer1.0.conv2.bias', 'layer1.0.conv3.bias', 'layer1.0.downsample.0.bias', 'layer1.1.conv1.bias', 'layer1.1.conv2.bias', 'layer1.1.conv3.bias', 'layer1.2.conv1.bias', 'layer1.2.conv2.bias', 'layer1.2.conv3.bias', 'layer2.0.conv1.bias', 'layer2.0.conv2.bias', 'layer2.0.conv3.bias', 'layer2.0.downsample.0.bias', 'layer2.1.conv1.bias', 'layer2.1.conv2.bias', 'layer2.1.conv3.bias', 'layer2.2.conv1.bias', 'layer2.2.conv2.bias', 'layer2.2.conv3.bias', 'layer2.3.conv1.bias', 'layer2.3.conv2.bias', 'layer2.3.conv3.bias', 'layer3.0.conv1.bias', 'layer3.0.conv2.bias', 'layer3.0.conv3.bias', 'layer3.0.downsample.0.bias', 'layer3.1.conv1.bias', 'layer3.1.conv2.bias', 'layer3.1.conv3.bias', 'layer3.2.conv1.bias', 'layer3.2.conv2.bias', 'layer3.2.conv3.bias', 'layer3.3.conv1.bias', 'layer3.3.conv2.bias', 'layer3.3.conv3.bias', 'layer3.4.conv1.bias', 'layer3.4.conv2.bias', 'layer3.4.conv3.bias', 'layer3.5.conv1.bias', 'layer3.5.conv2.bias', 'layer3.5.conv3.bias', 'layer4.0.conv1.bias', 'layer4.0.conv2.bias', 'layer4.0.conv3.bias', 'layer4.0.downsample.0.bias', 'layer4.1.conv1.bias', 'layer4.1.conv2.bias', 'layer4.1.conv3.bias', 'layer4.2.conv1.bias', 'layer4.2.conv2.bias', 'layer4.2.conv3.bias']]\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "f10w"
},
{
"contents": "I’ve just checked again and it appears that after cloning the new biases are also shared, which is expected and thus the above is actually not a good example of use cases. Let me come back to this question in a future when I have another example. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "f10w"
}
] | false |
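A minimal sketch of one way to clone a module while sharing parameters (as opposed to copy.deepcopy, which duplicates them): deep-copy the module, then re-bind every parameter of the copy to the original parameter objects. The clone_shared helper is a hypothetical name; buffers and other non-parameter state still need the kind of per-case handling discussed above.
<SCODE>
import copy
import torch.nn as nn

def clone_shared(module):
    clone = copy.deepcopy(module)
    for (name, param), _ in zip(list(module.named_parameters()),
                                list(clone.named_parameters())):
        parent = clone
        *path, leaf = name.split('.')
        for p in path:                 # walk down to the owning submodule
            parent = getattr(parent, p)
        setattr(parent, leaf, param)   # now shared: gradients accumulate in one tensor
    return clone

layer = nn.Linear(4, 4)
twin = clone_shared(layer)
print(twin.weight is layer.weight)     # True
<ECODE>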
How to convolution and deconvolution share weight | null | [
{
"contents": "How to convolution and deconvolution share weights?",
"isAccepted": false,
"likes": null,
"poster": "yichuan9527"
},
{
"contents": "Does this occur in a particular paper?",
"isAccepted": false,
"likes": null,
"poster": "ethancaballero"
}
] | false |
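One pattern that answers the question directly (used, for example, in tied-weight autoencoders) is to keep a single weight tensor and call the functional interface for both directions, so the convolution and the transposed convolution literally share the same parameter; a minimal sketch with arbitrary sizes:
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        # one shared kernel of shape (out_channels=8, in_channels=3, 3, 3)
        self.weight = nn.Parameter(torch.randn(8, 3, 3, 3) * 0.01)

    def forward(self, x):
        h = F.conv2d(x, self.weight, stride=2, padding=1)
        # conv_transpose2d reads the same weight as (in=8 -> out=3)
        return F.conv_transpose2d(h, self.weight, stride=2, padding=1,
                                  output_padding=1)

model = TiedConvAE()
print(model(torch.randn(1, 3, 32, 32)).size())   # torch.Size([1, 3, 32, 32])
<ECODE>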
Where is the source code for `torch.Tensor.contiguous()`? | null | [
{
"contents": "data = data.view(bsz, -1).t().contiguous()",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "apaszke"
},
{
"contents": "Thanks a lot!",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "<SCODE>x = torch.randn(5, 4)\nprint(x.stride(), x.is_contiguous())\nprint(x.t().strinde(), x.t().is_contiguous())\nx.view(4, 5) # ok\nx.t().view(4, 5) # fails\n<ECODE>",
"isAccepted": false,
"likes": 4,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "kj7kunal"
},
{
"contents": "Thanks for the clarification. I guess there is a minor typo in the second print: print(x.t().strinde(), x.t().is_contiguous()) should be print(x.t().stride(), x.t().is_contiguous())",
"isAccepted": false,
"likes": null,
"poster": "yassineAlouini"
}
] | false |
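To close the loop on the snippet above: the failing call succeeds once the transposed tensor is made contiguous, which is exactly what .contiguous() is for:
<SCODE>
import torch

x = torch.randn(5, 4)
print(x.stride(), x.is_contiguous())          # (4, 1) True
print(x.t().stride(), x.t().is_contiguous())  # (1, 4) False

y = x.t().contiguous()       # copies into a fresh, row-major layout
print(y.view(2, 10).size())  # reshape now succeeds: torch.Size([2, 10])
# x.t().view(2, 10)          # without .contiguous() this raises an error
<ECODE>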
LSTM hidden & cell outputs and packed_sequence for variable-length sequence inputs | null | [
{
"contents": "I have three questions regarding LSTM. Thanks!",
"isAccepted": false,
"likes": 8,
"poster": "supakjk"
},
{
"contents": "For a forward RNN, the returned last hidden and cell values are e00 if you don’t use PackedSequence, but they’re ezw if you do. For the backward direction of a bidirectional RNN, they’re axv in both cases, but the RNN will have started at ezw in the PackedSequence case and e00 in the case without it.",
"isAccepted": false,
"likes": 16,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "supakjk"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "cerisara"
},
{
"contents": "Yes, the padding is always assumed to be on the right. There are reasons to put the padding on the left if your model is performing computations on the padding tokens and you want to minimize the distortion, but if there are reasons to use left-padding rather than right-padding when the padding tokens won’t be used, we must have overlooked them.",
"isAccepted": false,
"likes": 2,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "cerisara"
},
{
"contents": "I have a question. What does it mean “padding on the right”? In real situation, “a” is represented by a vector, right? so one sample “x y z 0 0” is represented by a matrix. So what you mean “padding on the right” is “padding several all-zero lines under the useful data”. Am I right?",
"isAccepted": false,
"likes": null,
"poster": "Nick_Young"
},
{
"contents": "Yes. The padding is on the right if the timesteps of the sentence are left to right and you think of the vector of x or a as going into the screen.",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "I can not understand how it works. I saw doc in pytorch but I couldn’t figure it out. is there anyone who help me??",
"isAccepted": false,
"likes": 1,
"poster": "JooSung_Yoon"
},
{
"contents": "providing initial hidden state is independent of whether you give packed sequences or not. If you dont give initial hidden state, it is initialized to zero.",
"isAccepted": false,
"likes": 3,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "yukw777"
}
] | false |
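A compact sketch of the packing workflow discussed in this thread: right-padded sequences plus their true lengths go through pack_padded_sequence, the LSTM consumes the packed batch, and pad_packed_sequence restores a padded tensor; the returned h_n then corresponds to each sequence's last real time step rather than to the padding:
<SCODE>
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# batch of 3 right-padded sequences (batch_first), true lengths 5, 3, 2
padded = torch.randn(3, 5, 4)
lengths = [5, 3, 2]                      # sorted in decreasing order

lstm = nn.LSTM(input_size=4, hidden_size=6, batch_first=True)

packed = pack_padded_sequence(padded, lengths, batch_first=True)
packed_out, (h_n, c_n) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)

print(out.size())   # torch.Size([3, 5, 6]); positions past each length are zeros
print(h_n.size())   # torch.Size([1, 3, 6]); hidden state at each true last step
<ECODE>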
Combine / concat dataset instances | null | [
{
"contents": "I came up with two ideas: <SCODE>class Concat(Dataset):\n\n def __init__(self, datasets):\n self.datasets = datasets\n self.lengths = [len(d) for d in datasets]\n self.offsets = np.cumsum(self.lengths)\n self.length = np.sum(self.lengths)\n\n def __getitem__(self, index):\n for i, offset in enumerate(self.offsets):\n if index < offset:\n if i > 0:\n index -= self.offsets[i-1]\n return self.datasets[i][index]\n raise IndexError(f'{index} exceeds {self.length}')\n\n def __len__(self):\n return self.length\n<ECODE> <SCODE>loader = itertools.chain(*[MyDataset(f'file{i}') for i in range(1, 4)])\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "bodokaiser"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "<SCODE>x = itertools.repeat(itertools.chain.from_iterable([dataset1, dataset2]), times=epochs)\n\nnext(next(iter(x))\n\n# or \n\nfor epoch in x:\n for (inputs, targets) in epoch:\n print(inputs)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "bodokaiser"
},
{
"contents": "<SCODE>for epoch in range(num_epochs):\n dset = itertools.chain(...)\n dloader = # create DataLoader\n for ... in dloader:\n ...\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "acgtyrant"
},
{
"contents": "",
"isAccepted": false,
"likes": 9,
"poster": "jnhwkim"
},
{
"contents": "<SCODE>def return_all_items(dataset):\n all_items = []\n for i in range(len(dataset)):\n all_items.append(dataset[i])\n return all_items\n<ECODE> <SCODE>list1 = return_all_items(original_data)\nlist2 = return_all_items(transformed_data)\nlist1.extend(list2)\n<ECODE> then converting the list to dataset object <SCODE>class AugmentedDataset(Dataset):\n \n def __init__(self, combined_list, transform=None):\n self.combined_list = combined_list\n \n def __len__(self):\n return len(self.combined_list)\n \n def __getitem__(self, idx):\n sample = self.combined_list[idx] \n return sample\n \naugmented_dataset = AugmentedDataset(list1)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "kavyajeetbora"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jasha"
}
] | false |
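For reference, torch.utils.data now ships a ConcatDataset that does exactly this (and dataset1 + dataset2 uses it under the hood), so a hand-rolled Concat like the one above is no longer necessary:
<SCODE>
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

ds1 = TensorDataset(torch.arange(10.).unsqueeze(1))
ds2 = TensorDataset(torch.arange(10., 20.).unsqueeze(1))

combined = ConcatDataset([ds1, ds2])   # equivalent to ds1 + ds2
print(len(combined))                   # 20

loader = DataLoader(combined, batch_size=4, shuffle=True)
for (batch,) in loader:
    pass                               # iterates over both datasets as one
<ECODE>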
Masking optimizers for sparse input data? | null | [
{
"contents": "If I have a simple layer that has an option to ignore zeros in the input: <SCODE>import torch\nimport torch.nn as nn\nimport torch.optim as onn\nimport torch.autograd as ann\nimport torch.nn.functional as fnn\n\ntorch.manual_seed(123)\n\nclass SimpleLayer(nn.Module):\n\n def __init__(self, size_in, size_out, ignore_zero=True):\n super(SimpleLayer, self).__init__()\n self.weight = nn.Parameter(\n torch.randn(size_in, size_out) * 1e-5,\n requires_grad=True\n )\n self.ignore_zero = ignore_zero\n\n def forward(self, input_var):\n if self.ignore_zero:\n nz_inds = input_var.data.nonzero()[:, 1]\n return input_var[:, nz_inds].mm(self.weight[nz_inds])\n else:\n return input_var.mm(self.weight)\n<ECODE> And I create a training stack and some sparse data: <SCODE>layer = SimpleLayer(10, 5, ignore_zero=True)\nloss_func = fnn.smooth_l1_loss\noptimizer = onn.Adam(layer.parameters())\n\nsparse_input = torch.zeros(1, 10)\nsparse_input[0][2] = 0.2\nsparse_input[0][5] = 0.3\nsparse_input = ann.Variable(sparse_input)\n<ECODE> The output is identical whether ‘ignore_zero’ is set or not: <SCODE>layer.ignore_zero = True\nprint layer.forward((sparse_input))\n\nlayer.ignore_zero = False\nprint layer.forward((sparse_input))\n<ECODE> Outputs: <SCODE>Variable containing:\n1.00000e-06 *\n -2.2359 5.9174 3.7352 -3.4771 1.3588\n[torch.FloatTensor of size 1x5]\n\nVariable containing:\n1.00000e-06 *\n -2.2359 5.9174 3.7352 -3.4771 1.3588\n[torch.FloatTensor of size 1x5]\n<ECODE> On the other hand, results start to diverge after some training steps: <SCODE>layer.ignore_zero = False\nprint 'ignore_zero False:'\nfor i in range(5):\n outp = layer.forward((sparse_input))\n loss = fnn.smooth_l1_loss(outp, ann.Variable(torch.randn(1, 5)))\n loss.backward()\n optimizer.step()\n print loss.data[0]\n<ECODE> Gives: <SCODE>ignore_zero False:\n0.297815024853\n0.872213542461\n0.316926777363\n0.0565339252353\n0.746583342552\n<ECODE> <SCODE>layer.ignore_zero = True\nprint 'ignore_zero True:'\nfor i in range(5):\n outp = layer.forward((sparse_input))\n loss = fnn.smooth_l1_loss(outp, ann.Variable(torch.randn(1, 5)))\n loss.backward()\n optimizer.step()\n print loss.data[0]\n<ECODE> Gives: <SCODE>ignore_zero True:\n0.297815024853\n0.871760487556\n0.316960245371\n0.056279104203\n0.747062385082\n<ECODE> Is there some way to get the optimizer to agree with the masking step in the module’s forward pass? Thanks!",
"isAccepted": true,
"likes": null,
"poster": "cjmcmurtrie"
},
{
"contents": "<SCODE>class SimpleLayer(nn.Module):\n\n def __init__(self, size_in, size_out, ignore_zeros=True):\n super(SimpleLayer, self).__init__()\n self.weight = nn.Parameter(\n torch.randn(size_in, size_out) * 1e-5,\n requires_grad=True\n )\n self.ignore_zeros = ignore_zeros\n\n def forward(self, input_var):\n if self.ignore_zeros:\n nz_inds = ann.Variable(input_var.data.nonzero()[:, 1])\n inp_nz = input_var.index_select(1, nz_inds)\n weight_nz = self.weight.index_select(0, nz_inds)\n out = inp_nz.mm(weight_nz)\n return out\n else:\n return input_var.mm(self.weight)\n<ECODE>",
"isAccepted": true,
"likes": 3,
"poster": "cjmcmurtrie"
},
{
"contents": "Yes, we’re definitely planning to add it. Good to hear your problem is fixed in the newer version",
"isAccepted": true,
"likes": 2,
"poster": "apaszke"
}
] | true |
Cannot install PyTorch on Ubuntu14.04 via pip | null | [
{
"contents": "Hi, made a virtual environment via: virtualenv pytorch011 I then source it, and then run the command to install pytorch via pip with: pip install https://download.pytorch.org/whl/cu80/torch-0.1.10.post2-cp27-none-linux_x86_64.whl However I get this error: Not sure what to do here? Thanks",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "I figured as much, so I am in the process of updating python right now to 2.7.13…",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "Is sth wrong with my env?",
"isAccepted": false,
"likes": null,
"poster": "melgor"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "smth"
},
{
"contents": "I ended up just installed python 2.7.13 and now it works.",
"isAccepted": false,
"likes": null,
"poster": "Kalamaya"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Adel_Saleh1"
},
{
"contents": "This really works!!! Thanks.",
"isAccepted": false,
"likes": null,
"poster": "ginobilinie"
}
] | false |
ReLU in MNIST example | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "dima"
},
{
"contents": "I think it is not an unreasonable thing to to but it probably makes sense to remove it as it might speed up learning by a bit.",
"isAccepted": false,
"likes": null,
"poster": "pranav"
}
] | false |
How to declare a torch with unknown value in one dimension | null | [
{
"contents": "I have a list of sentences and I am convert the list to a 3d tensor. The first dimension represents number of sentences, second dimension represents number of words and third dimension represents word embedding size. The problem is, number of words can vary in sentences. I tried to create a 3d tensor as follows. <SCODE>all_sentences1 = torch.FloatTensor(len(instances), None, args.emsize)\n<ECODE> But this gives me the following error. <SCODE>TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, NoneType, int), but expected one of:\n * no arguments\n * (int ...)\n didn't match because some of the arguments have invalid types: (int, NoneType, int)\n * (torch.FloatTensor viewed_tensor)\n * (torch.Size size)\n * (torch.FloatStorage data)\n * (Sequence data)\n<ECODE> How can I declare a 3d tensor?",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "Edit",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "You need to understand that PyTorch works differently than static graph frameworks. You can’t define “placeholders” with unspecified sizes, but you can pass in tensors of different sizes to your model without any modifications. Instead of reusing a single placeholder, you always have to operate on real data.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
}
] | false |
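A minimal sketch of how variable-length sentences are usually turned into a single 3D tensor: build each sentence's (num_words, emsize) tensor from real data, then pad up to the longest sentence with torch.nn.utils.rnn.pad_sequence (available in 0.4 and later):
<SCODE>
import torch
from torch.nn.utils.rnn import pad_sequence

emsize = 8
# three sentences with 5, 3 and 2 words respectively
sentences = [torch.randn(n, emsize) for n in (5, 3, 2)]

batch = pad_sequence(sentences, batch_first=True)   # zero-pads to the max length
print(batch.size())                                 # torch.Size([3, 5, 8])
lengths = [s.size(0) for s in sentences]            # keep the true lengths around
print(lengths)                                      # [5, 3, 2]
<ECODE>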
CPU usage issue | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "xwgeng"
},
{
"contents": "And I attempt to install from source or binary, but no change. And the OS is CentOS Linux release 7.3.16.11",
"isAccepted": false,
"likes": null,
"poster": "xwgeng"
},
{
"contents": "What’s your PyTorch version? Also, did you try running e.g. with 4 OMP threads? I think the problem appears because when you’re using all the cores, they’re competing over cache space and can’t proceed as effectively.",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "My pytorch version is 0.1.10. And as you say, set the OMP_NUM_THREADS=4 and it works well. Thanks for your reply.",
"isAccepted": false,
"likes": null,
"poster": "xwgeng"
}
] | false |
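The same cap can be set from inside the script instead of the environment; a one-line equivalent of the OMP_NUM_THREADS workaround above:
<SCODE>
import torch

torch.set_num_threads(4)        # limit intra-op CPU parallelism
print(torch.get_num_threads())
<ECODE>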
Gradient computed on CPU but nor computed on GPU? | null | [
{
"contents": "I am writing some code for something similar to RoI pooling. The gradient propagates back well when I use CPU but not on GPU? Does anyone have any idea? Thanks a lot. A demo is like this. CPU: <SCODE>out = torch.zeros(1, 3, 6, 6)\nvout = Variable(out)\nfmap = np.arange(3 * 6 * 6).reshape((1, 3, 6, 6))\ntmap = Variable(torch.from_numpy(fmap).float(), requires_grad=True)\n\nmask = torch.zeros(1, 6, 6).byte()\nmask[0, 2:5, 2:5] = 1\nmask = Variable(mask.expand(1, 3, 6, 6))\nmasked = tmap.masked_select(mask).view(3, -1)\npooled = torch.max(masked, 1)[0][:, 0]\nvout[0, :, 0, 0] = pooled\n# similar to the operation above\nmask = torch.zeros(1, 6, 6).byte()\nmask[0, 3:6, 3:6] = 1\nmask = Variable(mask.expand(1, 3, 6, 6))\nmasked = tmap.masked_select(mask).view(3, -1)\npooled = torch.max(masked, 1)[0][:, 0]\nvout[0, :, 1, 1] = pooled\n\na = torch.mean(vout)\na.backward()\nprint tmap.grad\n<ECODE> GPU: <SCODE>out = torch.zeros(1, 3, 6, 6)\nvout = Variable(out).cuda()\nfmap = np.arange(3 * 6 * 6).reshape((1, 3, 6, 6))\ntmap = Variable(torch.from_numpy(fmap).float(), requires_grad=True).cuda()\n\nmask = torch.zeros(1, 6, 6).byte().cuda()\nmask[0, 2:5, 2:5] = 1\nmask = Variable(mask.expand(1, 3, 6, 6))\nmasked = tmap.masked_select(mask).view(3, -1)\npooled = torch.max(masked, 1)[0][:, 0]\nvout[0, :, 0, 0] = pooled\n\nmask = torch.zeros(1, 6, 6).byte().cuda()\nmask[0, 3:6, 3:6] = 1\nmask = Variable(mask.expand(1, 3, 6, 6))\nmasked = tmap.masked_select(mask).view(3, -1)\npooled = torch.max(masked, 1)[0][:, 0]\nvout[0, :, 1, 1] = pooled\n\na = torch.mean(vout)\na.backward()\nprint tmap.grad\n<ECODE> The result is None. I am using version 0.1.9.",
"isAccepted": false,
"likes": null,
"poster": "Donglai_Xiang"
},
{
"contents": "<SCODE>tmap_leaf = Variable(...)\ntmap = tmap_leaf.cuda()\n<ECODE> or better <SCODE>tmap = Variable(torch.from_numpy(...).float().cuda(), requires_grad=True)\n<ECODE>",
"isAccepted": false,
"likes": 3,
"poster": "apaszke"
},
{
"contents": "Oh, I get it. Thank you very much!",
"isAccepted": false,
"likes": null,
"poster": "Donglai_Xiang"
}
] | false |
How to apply a simple model’s parameters to a complicated model(two different model)) | null | [
{
"contents": "I have written a simple model(only the horizontal part of above model) and trained the parameters. Now, I want to apply the simple model’s parameters to the above model(a different one). The complicated model can’t inherit from the simple one(or use simple one as a part), So,is there any way to do this work?",
"isAccepted": false,
"likes": null,
"poster": "hongyuan"
},
{
"contents": "This will only work if each of the horizontal layers in your diagram are identical. If so then you can save the trained model parameters: <SCODE>torch.save(simple_model.state_dict(), \"path/to/file.pth\")\n<ECODE> and then load them into each of the horizontal layers: <SCODE>filepath = \"path/to/file.pth\"\n\ncomplex_model = [network() for i in range(num_layers)]\n\nfor layer in complex_model:\n layer.load_state_dict( torch.load( filepath ))\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Jordan_Campbell"
}
] | false |
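If the two models are not layer-for-layer identical, another common pattern is to copy only the entries whose names and shapes match before calling load_state_dict; a sketch with two placeholder models standing in for the simple and the complicated one:
<SCODE>
import torch
import torch.nn as nn

simple = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
complex_model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2),
                              nn.ReLU(), nn.Linear(2, 2))   # extra layers

complex_state = complex_model.state_dict()
simple_state = simple.state_dict()

# keep only entries that exist in both models with matching shapes
shared = {k: v for k, v in simple_state.items()
          if k in complex_state and v.size() == complex_state[k].size()}
complex_state.update(shared)
complex_model.load_state_dict(complex_state)
print(sorted(shared))   # ['0.bias', '0.weight', '2.bias', '2.weight']
<ECODE>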
What is the equivalent of Theano’s images2neibs | null | [
{
"contents": "In order to partition an image into smaller patches, during training, in Theano we can take advantage of theano.tensor.nnet.neighbours package that contains images2neibs, neibs2images. How can it be performed in PyTorch?",
"isAccepted": false,
"likes": null,
"poster": "Adriano_Pinto"
},
{
"contents": "No, I’m afraid we don’t have any operation like that. You can open a feature request in the main repository if you want.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "You can hack up a conv operation to do this. If you want a patch of say 8x8 just do a 8x8x64 convolution with zero padding and let the kernel be 1’s in different positions with all zeros. After you do this your 1x1x64 will be your 8x8 patches. This will probably be very inefficient with maximally sparse convolutions, but unless you intend to do this in an iterative manner it shouldn’t be noticeable. And should be faster than any loop you can cook up probably. With some extra effort you can make a better array programming solution, something involving reshapes and permutes probably.",
"isAccepted": false,
"likes": null,
"poster": "Veril"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "fmassa"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "pvskand"
}
] | false |
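For reference, later PyTorch versions added this operation natively: Tensor.unfold extracts sliding windows along a dimension, and nn.functional.unfold / nn.Unfold perform the images2neibs-style image-to-patches flattening, with fold as the inverse:
<SCODE>
import torch
import torch.nn.functional as F

img = torch.arange(16.).view(1, 1, 4, 4)

# non-overlapping 2x2 patches, flattened into columns: (N, C*kh*kw, n_patches)
patches = F.unfold(img, kernel_size=2, stride=2)
print(patches.size())               # torch.Size([1, 4, 4])

# inverse operation, analogous to neibs2images
restored = F.fold(patches, output_size=(4, 4), kernel_size=2, stride=2)
print(torch.equal(restored, img))   # True
<ECODE>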
How to load caffe models in pytorch | null | [
{
"contents": "Thanks a lot!",
"isAccepted": false,
"likes": 1,
"poster": "kirk86"
},
{
"contents": "",
"isAccepted": false,
"likes": 5,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "kirk86"
},
{
"contents": "Not anytime soon I’m afraid. There are a lot of high priority tasks.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "But it should be quite simple for someone to add that. You just need to read protobufs and translate graphs into modules.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "gotcha ya. Thanks a lot for the suggestions.",
"isAccepted": false,
"likes": null,
"poster": "kirk86"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "xiaoxing_zeng"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "sunitha_kanuri"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Neo_li"
},
{
"contents": "Good evening, Following your advice apaszke, I downloaded loadcaffe, and transformed the caffe model + prototxt file into a model.t7 file. The model which gets created is: When I try to use torch.utils.serialization.load_lua(‘imdb.t7’) the result is that the .t7 model is corrupted. As such, I was hoping to get some guidance. Here is the error code: Thank you for your time! Similarly, Neo_li’s solution is not working.",
"isAccepted": false,
"likes": null,
"poster": "aadharna"
}
] | false |
nn.DataParallel(model).cuda() hangs | null | [
{
"contents": "Hi, <SCODE>Traceback (most recent call last):\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/process.py\", line 249, in _bootstrap\n self.run()\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/process.py\", line 93, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/polphit/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 28, in _worker_loop\n r = index_queue.get()\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/queues.py\", line 343, in get\n res = self._reader.recv_bytes()\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 216, in recv_bytes\n buf = self._recv_bytes(maxlength)\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 407, in _recv_bytes\n buf = self._recv(4)\n File \"/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 379, in _recv\n chunk = read(handle, remaining)\nKeyboardInterrupt\n<ECODE> I tried on two different machines and got the same issue. Ubuntu 16.04, conda 4.3.14, pytorch installed from source, python 3.6.0.final.0 requests 2.12.4 CUDA 8.0 cuDNN 5.1 When I run the same code on a machine without conda, and python3, it works well.",
"isAccepted": false,
"likes": 1,
"poster": "thnkim"
},
{
"contents": "That’s a stack trace of a data loader process, can you paste a full error into a gist and link it here?",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "(I have two GPU devices) Flow is <SCODE>i (input) -> netA ---> netB -> x (output #1)\n +-> netC -> y (output #2)\n +-> netD -> z (output #3)\n<ECODE> If this is not helpful to guess the cause, I would like to simplify my codes to reproduce the issue with minimal data upload.",
"isAccepted": false,
"likes": null,
"poster": "thnkim"
},
{
"contents": "Oh, when I add torch.cuda.synchronize() at the end of a batch, one machine works properly, although the other machine still has the same issue.",
"isAccepted": false,
"likes": null,
"poster": "thnkim"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "Unfortunately even if we add these locs, doing that in two processes that use the same GPUs in DataParallel will deadlock too…",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "so…is it a bug of pytorch? I met the same issue.",
"isAccepted": false,
"likes": null,
"poster": "melody-rain"
},
{
"contents": "No, it’s a bug in NCCL (NVIDIA’s library). But you probably shouldn’t be using the same GPU in multiple data parallel jobs anyway.",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "octopusyo"
},
{
"contents": "Please have a look and see if it applies to you.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Mohit_Chhabra"
},
{
"contents": "It was some hardware issue, although the p2platency test passed, when i changed the CUDA_VISIBLE_DEVICES , and ran the code without DataParallel it gave illegal memory access error for the faulty GPU. Changed the GPU and now the code works fine.",
"isAccepted": false,
"likes": null,
"poster": "Mohit_Chhabra"
}
] | false |
Assertion failure when using Conv1d | null | [
{
"contents": "Am I doing something wrong or is this a bug? The dimension of the input is [batchsize, channels, data] < 4, and this fits with the documentation if I can read the equation right. There is a previous post here [which I can’t link because newbie link limit] that mentions the same error for conv2d without batch_size, and a possible hack around it with unsqueeze, but this shouldn’t be necessary for conv1d with batch_size.",
"isAccepted": false,
"likes": null,
"poster": "Orpheon"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
How can I do multiply between two 4-dimension matrix? | null | [
{
"contents": "So how should I do? THANKS",
"isAccepted": false,
"likes": null,
"poster": "xiaoxing_zeng"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "albanD"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "xiaoxing_zeng"
}
] | false |
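For reference, torch.matmul (and the @ operator) broadcast over leading batch dimensions, so two 4-dimensional tensors can be multiplied as a batch of matrix products, while * stays element-wise:
<SCODE>
import torch

a = torch.randn(2, 3, 4, 5)
b = torch.randn(2, 3, 5, 6)

c = torch.matmul(a, b)     # batched matrix multiply over the last two dims
print(c.size())            # torch.Size([2, 3, 4, 6])

d = a * torch.randn(2, 3, 4, 5)   # element-wise product needs matching shapes
print(d.size())            # torch.Size([2, 3, 4, 5])
<ECODE>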
How to normalize embedding vectors? | null | [
{
"contents": "Hi, I am using a network to embed some entity into vector space. As the length of the vector decrease during the training. I want to normalize it’s length to 1 in the end of each step. Is there any tool that I can use to normalize the embedding vectors?",
"isAccepted": true,
"likes": 13,
"poster": "maplewizard"
},
{
"contents": "I think the best thing you can do is to save the embedded indices, and normalize their rows manually after the update (just index_select them, compute row-wise norm, divice, index_copy back into weights). We only support automatic max norm clipping.",
"isAccepted": true,
"likes": 2,
"poster": "apaszke"
},
{
"contents": "If you want to normalize a vector as a part of a model, this should do it: assume q is the tensor to be L2 normalized, along dim 1",
"isAccepted": true,
"likes": 18,
"poster": "samarth-robo"
},
{
"contents": "Can I use it to normalise the embedding after each update in the training ?",
"isAccepted": true,
"likes": 1,
"poster": "asdass"
},
{
"contents": "",
"isAccepted": true,
"likes": 1,
"poster": "samarth-robo"
},
{
"contents": "I see, thanks. But I need to do the constraints on the norms of embeddings( not bigger than 1 ). Do you have any better suggestions to do it?",
"isAccepted": true,
"likes": null,
"poster": "asdass"
},
{
"contents": "<SCODE># suppose x is a Variable of size [4, 16], 4 is batch_size, 16 is feature dimension\nx = Variable(torch.rand(4, 16), requires_grad=True)\nnorm = x.norm(p=2, dim=1, keepdim=True)\nx_normalized = x.div(norm.expand_as(x))\n<ECODE>",
"isAccepted": true,
"likes": 4,
"poster": "jdhao"
},
{
"contents": "<SCODE>norm = x.norm(p=2, dim=1, keepdim=True)\nx_normalized = x.div(norm)\n<ECODE>",
"isAccepted": true,
"likes": 4,
"poster": "moi90"
},
{
"contents": "<SCODE>import torch.nn.functional as F\nx = F.normalize(x, p=2, dim=1)\n<ECODE>",
"isAccepted": true,
"likes": 29,
"poster": "jdhao"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "Liang"
},
{
"contents": "Then in this case I get the following error for any of the above options I try: TypeError: cannot assign ‘torch.autograd.variable.Variable’ as parameter ‘W’ (torch.nn.Parameter or None expected) <SCODE>class myUnit(nn.Module):\n def __init__(self,myParameter):\n super(myUnit, self).__init__()\n self.myParameter = Parameter(myParameter,requires_grad=True)\n def forward(self,input):\n \"\"\"\n Whatever operation. Just an example:\n \"\"\"\n self.myParameter = F.normalize(self.myParameter,p=2,dim=1)\n output = self.myParameter * input - 1\n return output<ECODE>",
"isAccepted": true,
"likes": null,
"poster": "sssohrab"
},
{
"contents": "May be you can try: <SCODE>self.myParameter.weight.data = F.normalize(self.myParameter.weight.data, p=2, dim=1)\n<ECODE>",
"isAccepted": true,
"likes": 1,
"poster": "Guohai93"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "Ritwick_Chaudhry"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "f10w"
},
{
"contents": "Your issue seems to be unrelated to the linked post, as your error points towards an error running a DDP model.",
"isAccepted": true,
"likes": null,
"poster": "ptrblck"
},
{
"contents": "Sorry I copied the wrong error message. The above message was for the following code: <SCODE>self.embeddings = torch.nn.Parameter(F.normalize(self.embeddings, p=2, dim=-1))\n<ECODE> What I would like to do is to constraint the weights of my model to always lie on a sphere. And I finally found the solution: <SCODE>self.embeddings.data.copy_(torch.nn.Parameter(F.normalize(self.embeddings.data, p=2, dim=-1)))\n<ECODE> Hope that this will be useful for future readers.",
"isAccepted": true,
"likes": null,
"poster": "f10w"
}
] | true |
How to experiment on test_torch.py? | null | [
{
"contents": "Hi Everyone, Basically, I want to run each test_function on each tensor operation individually, so I can see how the operation work. How I tried, but failed: <SCODE># inside jupyter notebook\n%cd /path to /pytorch-master/test\n\nimport torch\nimport test_torch\ntest1 = test_torch.TestTorch(methodName='runTest')\ntest1.test_dot()\n<ECODE> <SCODE>def test_dot(self):\n types = {\n 'torch.DoubleTensor': 1e-8,\n 'torch.FloatTensor': 1e-4,\n }\n for tname, prec in types.items():\n v1 = torch.randn(100).type(tname)\n v2 = torch.randn(100).type(tname)\n res1 = torch.dot(v1, v2)\n print(res1)\n res2 = 0\n for i, j in zip(v1, v2):\n res2 += i * j\n print(res2)\n self.assertEqual(res1, res2)\n<ECODE> Could anyone help me find a way to experiment on each test function individually? Thanks a lot! Daniel",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
}
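,
{
"contents": "One way to run a single test from test_torch.py, sketched with plain unittest (it assumes the working directory is the pytorch/test folder): <SCODE>import unittest\nimport test_torch  # the file shipped in pytorch/test\n\n# build a suite containing only the test of interest and run it\nsuite = unittest.TestSuite()\nsuite.addTest(test_torch.TestTorch('test_dot'))\nunittest.TextTestRunner(verbosity=2).run(suite)\n<ECODE> From a shell, python test_torch.py TestTorch.test_dot should do the same thing.",
"isAccepted": false,
"likes": null,
"poster": null
}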
] | false |
Question about pack_padded_sequence | null | [
{
"contents": "Result: Result: Thanks!",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "Why is this commented here & not in the next snippet?",
"isAccepted": false,
"likes": null,
"poster": "pranav"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
}
] | false |
Converting Image to Tensor Error | vision | [
{
"contents": "I am getting this error “RuntimeError: can’t convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8.” even though the ndarray is an array of 249 ndarrays of shape (512x384x3) with dtype=uint8. One interesting thing is np.array(images).shape returns (249,) not the entire shape I am expecting. The images are of varying size so I read in each one and create a thumbnail of standard sizing using PIL thumbnail method. Then i create an ndarray and add it to the images list. From there is try and create a Tensor. <SCODE>def read_image_files(path):\nsize = 512, 512\n\n# Read in image using imread\nimages = []\nfor typePath in tqdm_notebook(glob(path + \"/*\"), desc='Type'):\n count = 0\n for imgPath in tqdm_notebook(glob(typePath + \"/*\"), desc='Image'):\n img = Image.open(imgPath)\n img.thumbnail(size)\n images.append(np.array(img))\n\nreturn torch.from_numpy(np.array(images))<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "wwoodal1"
},
{
"contents": "We dont support ndarray of ndarrays, which is why you are seeing this error. torch itself does not support a Tensor containing many variable length Tensors, and we do not plan to support it.",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
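,
{
"contents": "A sketch of a workaround consistent with the explanation above: resize every image to one fixed size (Image.thumbnail only bounds the size and keeps the aspect ratio, so the shapes still differ), after which np.array/np.stack produce a proper 4-D uint8 array that torch.from_numpy accepts. The directory layout and the 224x224 size are placeholders. <SCODE>import numpy as np\nimport torch\nfrom glob import glob\nfrom PIL import Image\n\nsize = (224, 224)  # example fixed size\nimages = []\nfor img_path in glob('images/*/*'):\n    img = Image.open(img_path).convert('RGB')\n    images.append(np.array(img.resize(size)))  # every array is now 224 x 224 x 3\n\nbatch = torch.from_numpy(np.stack(images))  # uint8 tensor of shape (N, 224, 224, 3)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
}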
] | false |
Pytorch call the torch function | vision | [
{
"contents": "Btw, is there any way to use pytorch to call the torch layer? Best,",
"isAccepted": false,
"likes": null,
"poster": "Zhang_He"
},
{
"contents": "there isn’t a PyTorch way to call a torch layer.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Zhang_He"
}
] | false |
DataParallel, data may have different sizes | null | [
{
"contents": "Currently, the DataParallel forces the inputs having same size. But is it possible to support data with different sizes? I am using PyTorch to implement my detection and segmentation frameworks (computer vision). For example, in each image, the number of objects are different. My current solution is to add some dummy to make the annotations of different images in the same batch have same dimensions. I think this works bounding boxes but not a good solution for segmentation masks. I would like to hear the better solution for this kind of problems.",
"isAccepted": false,
"likes": null,
"poster": "chengyangfu"
},
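{
"contents": "A sketch of the padding idea described above, written as a DataLoader collate_fn; the (image, boxes) item layout and the use of -1 as padding value are assumptions for the example: <SCODE>import torch\n\ndef collate_detection(batch):\n    # batch is a list of (image_tensor, boxes_tensor) pairs,\n    # where boxes_tensor has shape (num_objects, 4) and num_objects varies\n    images = torch.stack([item[0] for item in batch], 0)\n    max_objects = max(item[1].size(0) for item in batch)\n    padded = images.new(len(batch), max_objects, 4).fill_(-1)\n    for i, (_, boxes) in enumerate(batch):\n        padded[i, :boxes.size(0)] = boxes\n    return images, padded\n<ECODE> Passing this as collate_fn to the DataLoader gives every batch equally sized annotation tensors, which DataParallel can then scatter.",
"isAccepted": false,
"likes": null,
"poster": null
},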
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "apaszke"
}
] | false |
DataLoader vs convolution tensor shape | null | [
{
"contents": "<SCODE>from torchvision.datasets import ImageFolder\nfrom torchvision.transforms import ToTensor\ndata = ImageFolder(root='PytorchTestImgDir', transform=ToTensor()) \nprint(data.classes) \nfrom torch.utils.data import DataLoader\ntrain_loader = DataLoader(data, batch_size=1)\n<ECODE> but the result is a: RuntimeError: Need input of dimension 4 and input.size[1] == 1 but got input to be of shape: [1 x 3 x 138 x 138] at /Users/soumith/anaconda/conda-bld/pytorch-0.1.10_1488750409207/work/torch/lib/THNN/generic/SpatialConvolutionMM.c:47",
"isAccepted": false,
"likes": null,
"poster": "rracinskij"
},
{
"contents": "Looks like the MNIST example’s model expects one color channel (MNIST is black and white) but the images you’re providing are being loaded with three (RGB) channels.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
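{
"contents": "Two ways to reconcile that, sketched with assumed sizes: either make the data single-channel so it matches the MNIST-style model, or widen the first convolution to accept 3 input channels. <SCODE>import torch.nn as nn\nfrom torchvision import transforms\n\n# option 1: convert the images to one channel (Grayscale is available in newer torchvision)\ntransform = transforms.Compose([transforms.Grayscale(num_output_channels=1),\n                                transforms.ToTensor()])\n\n# option 2: let the model accept RGB input\nconv1 = nn.Conv2d(3, 10, kernel_size=5)  # the MNIST example uses nn.Conv2d(1, 10, kernel_size=5)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},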
{
"contents": "My blunder, thanks a lot for figuring it out!",
"isAccepted": false,
"likes": null,
"poster": "rracinskij"
},
{
"contents": "nst as bx or not, no right wrong",
"isAccepted": false,
"likes": null,
"poster": "Lee_Jim"
}
] | false |
How to transform classification into regression? | null | [
{
"contents": "Well here’s a pretty simple problem, how do you go from a loss = nn.CrossEntropyLoss(output, labels) y_pred = torch.normal( mu, sigma2.sqrt() ) and loss = F.smooth_l1_loss(y_pred, labels) Here’s the relevant bit of my code, <SCODE> def forward(self, x):\n # Set initial states\n h0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size)) # 2 for bidirection \n c0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size))\n \n # Forward propagate RNN\n out, _ = self.lstm(x, (h0, c0))\n \n # Decode hidden state of last time step\n mu = self.mu( out[:, -1, :] )\n sigma2 = self.sigma2( out[:, -1, :] )\n return mu, sigma2\n\nrnn = BiRNN(input_size, hidden_size, num_layers, num_classes)\n\n# Loss and Optimizer\noptimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)\n \n# Train the Model \nfor epoch in range(num_epochs):\n for i, (images, labels) in enumerate(train_loader):\n images = Variable(images.view(-1, sequence_length, input_size))\n labels = Variable( labels.float() ) \n # Forward + Backward + Optimize\n optimizer.zero_grad()\n #outputs = rnn(images)\n mu, sigma2 = rnn(images)\n sigma2 = (1 + sigma2.exp()).log() # ensure positivity\n y_pred = torch.normal( mu, sigma2.sqrt() ) \n y_pred = y_pred.float()\n #y_pred = Variable( torch.normal(mu, sigma2.sqrt()).data.float() ) \n loss = F.smooth_l1_loss( y_pred , labels ) \n loss.backward()\n optimizer.step()\n\n<ECODE> and the compile error, <SCODE> File \"main_v1.py\", line 90, in <module>\n loss.backward()\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/stochastic_function.py\", line 13, in _do_backward\n raise RuntimeError(\"differentiating stochastic functions requires \"\nRuntimeError: differentiating stochastic functions requires providing a reward\n<ECODE> It’s modified from yunjey/pytorch-tutorial/blob/master/tutorials/07%20-%20Bidirectional%20Recurrent%20Neural%20Network/main.py OR, perhaps I’m making it more complicated than it needs to be with the Gaussian thing? Should I just stick an encoder on the output of the LSTM ???",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
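{
"contents": "A sketch of one way to avoid that error while keeping the Gaussian output: draw the noise outside the graph and reparameterize, so the sample stays differentiable with respect to mu and sigma2 and backward needs no reward (the tensors below are stand-ins for the network outputs): <SCODE>import torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nmu = Variable(torch.randn(100, 1), requires_grad=True)\nsigma2 = Variable(torch.rand(100, 1) + 0.1, requires_grad=True)\nlabels = Variable(torch.randn(100, 1))\n\neps = Variable(torch.randn(mu.size()))   # noise sampled outside the graph\ny_pred = mu + sigma2.sqrt() * eps        # reparameterized sample, differentiable\nloss = F.smooth_l1_loss(y_pred, labels)\nloss.backward()                          # no reward argument needed\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},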
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
{
"contents": "Cheers",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "It’s nothing to do with PyTorch, I just haven’t thought carefully enough about what I’m actually trying to do here - I was carrying over an idea from continuous action reinforcement learning, and it doesn’t seem to make sense in the context of regression?",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
}
] | false |
Updating the parameters of a few nodes in a pre-trained network during training | vision | [
{
"contents": "Hi, I am really new to PyTorch and was wondering if there is a way to specify only a subset of neurons (of a particular layer) to update during training and freeze the rest. Say, update only 2500 nodes of the 4096 in AlexNet, FC7. param.requires_grad seems to apply to all the neurons. Appreciate your inputs.",
"isAccepted": false,
"likes": null,
"poster": "metro.smiles"
},
{
"contents": "I add simple codes. If I understand your question exactly, It will be helpful <SCODE>D_parameters = [\n {'params': model.fc1.parameters()},\n {'params': model.fc2.parameters()}\n] # define a part of parameters in model\n\noptimizerD = torch.optim.Adam(D_parameters, lr=learning_rate)\n\nfor epoch in range(training_epoch):\n ....\n optimizerD.zero_grad() # zero_grad only D_parameters\n # do something \n loss.backward() # calculate grads of all\n optimizerD.step() # update only D_parameters\n\n<ECODE>",
"isAccepted": false,
"likes": 5,
"poster": "jhjungCode"
},
{
"contents": "Hi, Appreciate your prompt response but this is not exactly what I am looking for. I want to update only a subset of the fc1 parameters, for example. If we update D_parameters, in your example, then all weights from the 4096 nodes of FC1 get updated. What I want is to update a subset of it, say 2900 of them (that I have as a list).",
"isAccepted": false,
"likes": null,
"poster": "metro.smiles"
},
{
"contents": "I think you should reconstruct trained model, and then apply updated a few nodes. <SCODE>import torch\nimport torch.nn as nn\nfrom torchvision import models\n\noriginal_model = models.alexnet(pretrained=True)\n\nclass AlexNetConv4(nn.Module):\n def __init__(self):\n super(AlexNetConv4, self).__init__()\n self.features = nn.Sequential(\n # stop at conv4\n *list(original_model.features.children())[:-3]\n )\n self.fc1 = nn.Linear(2900, n_outpout, bias = True)\n self.fc2 = nn.Linear(4096-2900, n_outpout, bias = True)\n\n def forward(self, x):\n x = self.features(x)\n x1 = self.fc1(x[:,:2900])\n x2 = self.fc2(x[:,2900:])\n ....\n return x\n\nmodel = AlexNetConv4()\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "jhjungCode"
},
{
"contents": "you could use a backward hook on the output on fc2 to zero the gradients going through parts that you want to filter. For example: <SCODE>m = nn.Linear(1024, 4096)\ninput = Variable(torch.randn(128, 1024), requires_grad=True)\n\nout = m(input)\ndef my_hook(grad):\n grad_clone = grad.clone()\n grad_clone[:, 2500:] = 0\n return grad_clone\nh = out.register_hook(my_hook) # zeroes the gradients wrt the outputs for everything that's not 0 to 2500 over all mini-batches\n\nout.backward(grads)\n<ECODE>",
"isAccepted": false,
"likes": 5,
"poster": "smth"
},
{
"contents": "Good!! I knew new practical usages of hooking by your answer. Thank you",
"isAccepted": false,
"likes": null,
"poster": "jhjungCode"
},
{
"contents": "<SCODE>def my_hook(grad):\n grad_clone = grad.clone()\n grad_clone[:, 2500:] = 0\n return grad_clone\n<ECODE>",
"isAccepted": false,
"likes": 2,
"poster": "fmassa"
},
{
"contents": "<SCODE>m = nn.Linear(1024, 4096)\ninput = Variable(torch.randn(128, 1024), requires_grad=True)\n\nout = m(input)\nh = out.register_hook(my_hook(grad)) \n\nout.backward(grads)\n\ndef my_hook(grad):\n grad_clone = grad.clone()\n grad_clone[:, 2500:] = 0\n<ECODE> Like this?",
"isAccepted": false,
"likes": null,
"poster": "dlmacedo"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "fmassa"
},
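{
"contents": "For reference, a corrected version of the snippet above: define the hook first, return the modified gradient, and pass the function itself to register_hook instead of calling it (the sizes are the same made-up ones): <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\ndef my_hook(grad):\n    grad_clone = grad.clone()\n    grad_clone[:, 2500:] = 0\n    return grad_clone\n\nm = nn.Linear(1024, 4096)\ninput = Variable(torch.randn(128, 1024), requires_grad=True)\nout = m(input)\nh = out.register_hook(my_hook)        # pass the function, do not call it\nout.backward(torch.ones(out.size()))\nh.remove()                            # detach the hook once it is no longer needed\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},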
{
"contents": "For sure. Thanks, David",
"isAccepted": false,
"likes": null,
"poster": "dlmacedo"
},
{
"contents": "Hey,",
"isAccepted": false,
"likes": null,
"poster": "Luiza_Sayfullina"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "WERush"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Rahul_Patel"
}
] | false |
Can we use pre-trained word embeddings for weight initialization in nn.Embedding? | null | [
{
"contents": "I want to use pre-trained word embeddings as initial weight vectors for embedding layer in a encoder model? How can I achieve this? For example, in Keras, we can pass a weight matrix as parameter to the embedding layer. Is there any similar way in PyTorch to do this?",
"isAccepted": false,
"likes": 7,
"poster": "wasiahmad"
},
{
"contents": "you can just assign the weight to the embedding layer. Like: <SCODE>embed = nn.Embedding(num_embeddings, embedding_dim)\n# pretrained_weight is a numpy matrix of shape (num_embeddings, embedding_dim)\nembed.weight.data.copy_(torch.from_numpy(pretrained_weight))<ECODE>",
"isAccepted": false,
"likes": 23,
"poster": "ruotianluo"
},
{
"contents": "I usually use the following way, which is better? <SCODE>#embeddings is a torch tensor.\nembedding = nn.Embedding(embeddings.size(0), embeddings.size(1))\nembedding.weight = nn.Parameter(embeddings)\n<ECODE>",
"isAccepted": false,
"likes": 8,
"poster": "ShawnGuo"
},
{
"contents": "I’d say they’re both ok.",
"isAccepted": false,
"likes": 5,
"poster": "apaszke"
},
{
"contents": "And how can we keep the embedding matrix fixed during training? I didn’t find that in the doc.",
"isAccepted": false,
"likes": 3,
"poster": "xinyadu"
},
{
"contents": "<SCODE>embed.weight.requires_grad = False\n<ECODE> to freeze the parameter of it.",
"isAccepted": false,
"likes": 18,
"poster": "cyyyyc123"
},
{
"contents": "But when I initialize optimizer, i got “ValueError: optimizing a parameter that doesn’t require gradients”",
"isAccepted": false,
"likes": 2,
"poster": "xinyadu"
},
{
"contents": "",
"isAccepted": false,
"likes": 12,
"poster": "xwgeng"
},
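{
"contents": "The usual way around that error is to hand the optimizer only the parameters that still require gradients; a sketch with a stand-in model: <SCODE>import torch.nn as nn\nimport torch.optim as optim\n\nmodel = nn.Sequential(nn.Embedding(100, 10), nn.Linear(10, 2))\nmodel[0].weight.requires_grad = False   # frozen, pre-trained embedding\n\ntrainable = [p for p in model.parameters() if p.requires_grad]\noptimizer = optim.SGD(trainable, lr=0.1)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},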
{
"contents": "How can we specifically use glove vectors and mainly in the encoder - decoder model ? Not able to understand.",
"isAccepted": false,
"likes": 1,
"poster": "Navneet_M_Kumar"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "tushargupta14"
},
{
"contents": "What if I want to use sentence embedding as a whole and not word vectors. Suppose I have the sentence embeddings ready, do I create a (number of sentences X sentence embedding dimension) matrix and map it to each sentence id? And pass this matrix to the embedding layer and call the sentence ids in the forward function? Is this approach right? I’m trying to perform a type of sentence classification.",
"isAccepted": false,
"likes": null,
"poster": "sritvik"
},
{
"contents": "Regarding the use of pre-trained word embeddings, the indexes of words that are passed to the Embedding layer should be equivalent to their index in the pre-trained embedding (the numpy matrix), right?",
"isAccepted": false,
"likes": null,
"poster": "mfa"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "shirdu"
},
{
"contents": "Yes, it’s a matrix. Each row is the embedding for a word. You’ll also want a dictionary: mapping of words (strings) to integers 0, 1, …, N",
"isAccepted": false,
"likes": null,
"poster": "colesbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "shirdu"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "shirdu"
},
{
"contents": "Yes, the indexes of the words in your vocabulary should be same as your indexes in your embedding numpy matrix. This helps the embedding layer to map the word tokens to vectors.",
"isAccepted": false,
"likes": null,
"poster": "tushargupta14"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "austin"
},
{
"contents": "Sorry for the bump - what might be the easiest way to have only parts of the embedding matrix frozen? For example, if I wanted to use pre-trained embeddings but for certain words assign a special custom token whose embedding I want to train?",
"isAccepted": false,
"likes": null,
"poster": "sidedishes"
},
{
"contents": "Two ways I’d see doing it: First to have two separate embeddings. One embedding learns, the other uses pre-trained weights. Select the embedding to use depending on the value of the input. The other approach would be to overwrite the pretrained parts of the embedding at the beginning of each batch to undo the results of the previous optimizer step.",
"isAccepted": false,
"likes": null,
"poster": "jlquinn"
}
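,
{
"contents": "A sketch of the second approach (restoring the pre-trained rows after every optimizer step); pretrained_idx, pretrained_weight and all sizes are placeholders: <SCODE>import torch\nimport torch.nn as nn\n\nembed = nn.Embedding(1000, 50)\npretrained_idx = torch.LongTensor([0, 1, 2])   # rows that must stay fixed\npretrained_weight = torch.randn(3, 50)         # their pre-trained vectors\n\n# ... inside the training loop, right after loss.backward() and optimizer.step():\nembed.weight.data[pretrained_idx] = pretrained_weight\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
}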
] | false |
How to store state_dict in memory and keep it unlinked to the state of the net? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "tunante"
},
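{
"contents": "One pattern that is often used for this (a generic sketch, not tied to any particular model): state_dict() returns tensors that still share storage with the live parameters, so take a deep copy if the snapshot has to stay frozen while training continues. <SCODE>import copy\nimport torch.nn as nn\n\nmodel = nn.Linear(4, 2)\nsnapshot = copy.deepcopy(model.state_dict())   # detached copy, unaffected by later updates\n# ... keep training ...\nmodel.load_state_dict(snapshot)                # restore the saved weights later\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},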
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jhjungCode"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "tunante"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "smth"
}
] | false |
attributeError on `_rebuild_tensor` caused conda install avoided by pip install | null | [
{
"contents": "<SCODE> \", \".join([t for t in dir(torch._utils)])\n ### inside _utils we have the following function, but why not shown up???\n\n # def _rebuild_tensor(storage, storage_offset, size, stride):\n # class_name = storage.__class__.__name__.replace('Storage', 'Tensor')\n # module = importlib.import_module(storage.__module__)\n # tensor_class = getattr(module, class_name)\n # return tensor_class().set_(storage, storage_offset, size, stride)\n\n# output is the following\n# '__builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__, __spec__, \n# _accumulate, _cuda, _import_dotted_name, _range, _type, torch'\n<ECODE> <SCODE>\", \".join([t for t in dir(torch.tensor)])\n\n# output is the following\n# '_TensorBase, __builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__, \n# __spec__, _cuda, _range, _tensor_str, _type, sys, torch'\n<ECODE> Guesses <SCODE>ERROR: test_serialization_map_location (__main__.TestTorch)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"test_torch.py\", line 2711, in test_serialization_map_location\n tensor = torch.load(test_file_path, map_location=map_location)\n File \"/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py\", line 370, in _load\n result = unpickler.load()\nAttributeError: Can't get attribute '_rebuild_tensor' on <module 'torch._utils' from '/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/_utils.py'>\n<ECODE> Solution worked this time <SCODE>Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x114c5da60>\nTraceback (most recent call last):\n File \"/Users/Natsume/miniconda2/envs/pytorch-experiment/lib/python3.5/weakref.py\", line 117, in remove\nTypeError: 'NoneType' object is not callable\n<ECODE> should I be worried about this one? Thanks a lot!",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "I met the same problem. The reason may be the version of the pretrained model you load doesn’t match with your pytorch.",
"isAccepted": false,
"likes": null,
"poster": "cslxiao"
}
] | false |
How to show a image in jupyter notebook with pytorch easily? | null | [
{
"contents": "like in itorch, I can use itorch.image to show a image in the notebook.",
"isAccepted": false,
"likes": 2,
"poster": "richardhahahaha"
},
{
"contents": "Yeah this frustrated me a lot too because it’s so easy with itorch… You have a few options with python but there’s not a stand-alone command that I’m aware of for displaying an image from a PyTorch Tensor. If you convert to a PIL image then you can just execute the Image variable in a cell and it will display the image. To load to PIL: <SCODE>img = Image.open('path-to-image-file').convert('RGB')\n<ECODE> Or to convert straight from a PyTorch Tensor: <SCODE>to_pil = torchvision.transforms.ToPILImage()\nimg = to_pil(your-tensor)\n\n<ECODE>",
"isAccepted": false,
"likes": 4,
"poster": "amdegroot"
},
{
"contents": "this is how I display an image: <SCODE>import torch as t\nfrom torchvision.transforms import ToPILImage\nfrom IPython.display import Image\nto_img = ToPILImage()\n\n# display tensor\na = t.Tensor(3, 64, 64).normal_()\nto_img(a)\n\n# display imagefile\nImage('/path/to/my.png')\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "chenyuntc"
},
{
"contents": "you can look at my notebook here, but yea seems like no standard solution. Basically, I do: <SCODE>%matplotlib inline\ndef show(img):\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest')\n<ECODE>",
"isAccepted": false,
"likes": 12,
"poster": "smth"
},
{
"contents": "Hi Soumith, I would like to know how I can use your solution, if I am using iterator like this <SCODE>dataiter = iter(trainloader)\nimages, labels = dataiter.next()\n<ECODE> Thank you in advance.",
"isAccepted": false,
"likes": 2,
"poster": "oya163"
},
{
"contents": "I have same question…",
"isAccepted": false,
"likes": null,
"poster": "farrokhi"
},
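{
"contents": "With the show helper defined earlier in this thread, a batch coming out of a DataLoader can be displayed by tiling it into one image first (a sketch; trainloader is assumed to yield image batches): <SCODE>import torchvision\n\ndataiter = iter(trainloader)\nimages, labels = next(dataiter)\nshow(torchvision.utils.make_grid(images))   # show() is the helper defined above\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},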
{
"contents": "<SCODE> display(Image(image_path))\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "Vimos"
},
{
"contents": "This is what I did, hope can be helpful for someone. <SCODE>from PIL import Image\n`image = Image.open('img_path.jpg').convert('RGB')\n plt.imshow(np.transpose(np.asarray(content.squeeze()),(1,2,0)))`\n\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "talha3111997"
},
{
"contents": "What I did: transfer the tensor to numpy array, reshape(if needed), and use plt.imshow(): <SCODE>import matplotlib.pyplot as plt\nimg_np_arr = img_tensor.numpy() # transfer the pytorch tensor(img_tensor) to numpy array\nimg_np_arr.shape # check shape before reshape if needed\nimg_np_arr_reshaped = img_np_arr.reshape(img_w, img_h) # reshape to 2-dims image\nplt.imshow(img_np_arr_reshaped, cmap='gray') # when display the grayscale image\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "xiaoyanzhuo"
},
{
"contents": "This worked flawlessly for displaying an image from CIFAR-100 dataset. Thank you!",
"isAccepted": false,
"likes": null,
"poster": "ramapinnimty"
}
] | false |
How to index a two-dimension tensor with two single-dimension tensors | null | [
{
"contents": "I want to index a two-dimension tensor like I do in numpy. <SCODE>TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.\n<ECODE> How do I index a two-dimension tensor using other two tensors?",
"isAccepted": false,
"likes": 2,
"poster": "Coldmooon"
},
{
"contents": "a[b, :][:, b] We plan to tackle this soon.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "Thanks for your reply. But this still gives the same error: <SCODE>>>> a[b,:][:,b]\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.\n<ECODE> My pytorch vision is: <SCODE>$ conda list | grep torch \npytorch 0.1.10 py35_1cu80 [cuda80] soumith\ntorchvision 0.1.6 py35_19 soumith\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Coldmooon"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "b64406620"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "YankuHorn"
},
{
"contents": "x[b].transpose(1, 0)[b].transpose(1, 0) should do the trick. Which seems self-contradictory",
"isAccepted": false,
"likes": null,
"poster": "dblythe"
},
{
"contents": "Can also try adding something like: <SCODE>_oldgetitem = torch.FloatTensor.__getitem__\n\n\ndef _getitem(self, slice_):\n if type(slice_) is tuple and torch.LongTensor in [type(x) for x in slice_]:\n i = [j for j, ix in enumerate(slice_)\n if type(ix) == torch.LongTensor][0]\n return self.transpose(0, i)[slice_[i]].transpose(i, 0)\n else:\n return _oldgetitem(self, slice_ )\n\n\ntorch.FloatTensor.__getitem__ = _getitem\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "dblythe"
}
] | false |
Run two different jobs in parallel on same GPUs, I got my GPU locked up | null | [
{
"contents": "firstly I’ve training on a regular python file using 4 GPUs using function Dataparallel. Then I loaded saved parameters using ipython notebook through SSH while the previous job is still running. when I load it on a single GPU instead of Dataparallel, it shows that weight doesn’t exist.So instead I use Dataparallel function on the same GPUs just like training process, then the problem occurred. The ipython froze, and I immediately kill the job. Then My GPU is locked up like the picture shows. I can restart any job but it only shows 1MB memory whatever I tried. I’ve ran into the same problem before, reboot can do but my peers are also using this remote server. What can I do ;-(, I searched through ‘ps -ef’ but still cannot find relevant jobs that caused the problem. What have I done ;-(.",
"isAccepted": false,
"likes": null,
"poster": "hbkunn"
},
{
"contents": "So the problem is because the NVIDIA libraries we’re using for inter-GPU communication in DataParallel do some funky stuff and they can leave the driver in some inconsistent state. Just remember to never launch multiple DataParallel jobs that share some of the GPUs (it’s ok to run one job on GPU 0, 1 and anoher on 2, 3).",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "I have rebooted my server, but still some strange errors occur occasionally like ‘RuntimeError: cuda runtime error (4) : unspecified launch failure at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorCopy.c:18’. And when I ran ‘nvidia-smi’ sometimes it’s extremely slow…",
"isAccepted": false,
"likes": null,
"poster": "hbkunn"
},
{
"contents": "e… it seems like a serious problem. unspecified launch failure constantly occurs about 1 or 2 hours after I launch my code. I can’t find any solution related. Should I reinstall my NVIDIA driver…?",
"isAccepted": false,
"likes": null,
"poster": "hbkunn"
},
{
"contents": "is your GPU becoming too hot? occurs after 1 to 2 hours of launching your code sounds like that might be a problem? (because Unspecified Launch Failure might sometimes be that)",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "maybe try a complete power cycle? shut down the machine for a few seconds?",
"isAccepted": false,
"likes": null,
"poster": "yzhu"
},
{
"contents": "Not sure why but this comment was addressed to me? I received an email notification.",
"isAccepted": false,
"likes": null,
"poster": "cjmcmurtrie"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "hbkunn"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "hbkunn"
}
] | false |
Is it possible to make a function transparent between cpu and gpu? | null | [
{
"contents": "I defined a function in which there’re temporary variables are defined, for example: <SCODE>def foo(input_variable): \n tmp = Variable(torch.zeros(input_variable.size())) \n return torch.cat((input_variable, tmp), 1) \n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "david-leon"
},
{
"contents": "by the way, you can use below to format your code it will be formated like this <SCODE>def foo(input_variable): \n tmp = Variable(torch.zeros(input_variable.size())) \n return torch.cat((input_variable, tmp), 1)\n<ECODE> it’s markdown sytanx",
"isAccepted": false,
"likes": null,
"poster": "chenyuntc"
},
{
"contents": "Thanks, this really helps!",
"isAccepted": false,
"likes": null,
"poster": "david-leon"
},
{
"contents": "The following works when X is on CPU but fails to if its not: <SCODE>>>> X = Variable(torch.zeros(2, 2))\n>>> torch.Tensor.new(X.data, torch.zeros(2, 2))\n\n 0 0\n 0 0\n[torch.FloatTensor of size 2x2]\n\n>>> X = Variable(torch.zeros(2, 2)).cuda()\n>>> torch.Tensor.new(X.data, torch.zeros(2, 2))\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-5-44093d4f4ab5> in <module>()\n----> 1 torch.Tensor.new(X.data, torch.zeros(2, 2))\n\nTypeError: unbound method new() must be called with FloatTensor instance as first argument (got FloatTensor instance instead)\n\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "pranav"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": 2,
"poster": "albanD"
},
{
"contents": "For batch size = 128, on my model way 1 runs 3 times slower than way 2 & 3 (~4s vs ~1.6s). Way3 is slightly faster than way 2. Is this supposed to be or am I doing wrong? Besides, is there any way to get the device a tensor currently resides on?",
"isAccepted": false,
"likes": null,
"poster": "david-leon"
},
{
"contents": "what you want to do is: noise = x.data.new(x.size()).normal_() That will be the fastest.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
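{
"contents": "Putting that together for the original foo example, and answering the device question, a sketch: <SCODE>import torch\nfrom torch.autograd import Variable\n\ndef foo(input_variable):\n    # the temporary tensor is created on the same device and with the same dtype as the input\n    tmp = Variable(input_variable.data.new(*input_variable.size()).zero_())\n    return torch.cat((input_variable, tmp), 1)\n\nx = Variable(torch.randn(4, 3))\nprint(foo(x).size())      # works unchanged if x is first moved to the GPU with .cuda()\nprint(x.data.is_cuda)     # False on CPU\n# x.data.get_device() returns the GPU index, but only for CUDA tensors\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},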
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "pranav"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "david-leon"
},
{
"contents": "do <SCODE>noise =x.data.new(*x.size()).normal_()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "chenyuntc"
}
] | false |
What are items inside `torch.__builtins__` for? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "these are python built-in objects / members.",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "Given they are called builtin functions or classes or types, please correct me if I am wrong: <SCODE>any, __name__, StopAsyncIteration, RuntimeWarning, BufferError, NotImplementedError, UnboundLocalError, repr, UnicodeTranslateError, UnicodeWarning, max, IndentationError, BytesWarning, None, IsADirectoryError, __IPYTHON__, AttributeError, reversed, ConnectionError, BaseException, issubclass, ConnectionResetError, range, staticmethod, PendingDeprecationWarning, __doc__, AssertionError, GeneratorExit, chr, abs, RecursionError, MemoryError, __package__, frozenset, str, enumerate, ImportError, dreload, bytes, float, map, TabError, True, ZeroDivisionError, False, memoryview, super, type, filter, SystemError, getattr, callable, __build_class__, setattr, sum, round, ConnectionAbortedError, OSError, ValueError, dict, DeprecationWarning, RuntimeError, TypeError, pow, hasattr, exec, zip, ArithmeticError, help, LookupError, UserWarning, PermissionError, print, ReferenceError, SystemExit, IndexError, complex, all, license, UnicodeError, input, vars, open, KeyboardInterrupt, bytearray, StopIteration, ConnectionRefusedError, __import__, SyntaxError, KeyError, UnicodeEncodeError, dir, ProcessLookupError, sorted, FutureWarning, ascii, ChildProcessError, next, IOError, EnvironmentError, format, id, __spec__, ImportWarning, divmod, TimeoutError, EOFError, Ellipsis, OverflowError, object, eval, hash, locals, InterruptedError, delattr, int, FloatingPointError, credits, Exception, min, BlockingIOError, __loader__, globals, NotADirectoryError, NameError, slice, get_ipython, classmethod, UnicodeDecodeError, copyright, compile, bool, isinstance, ord, __debug__, len, hex, ResourceWarning, BrokenPipeError, Warning, tuple, oct, iter, FileNotFoundError, NotImplemented, set, property, SyntaxWarning, FileExistsError, bin, list\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "dl4daniel"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
}
] | false |
Initialize different convolution layers of a network with different method | null | [
{
"contents": "I want to initialize different convolution layers of a network with different methods. Is there any way to do this thing? Please help me.",
"isAccepted": false,
"likes": null,
"poster": "Soniya"
},
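{
"contents": "A sketch of one common pattern: walk over named_modules() and dispatch on the layer name or type (the chosen schemes, the name test and the newer underscore-suffixed init functions are just one possible combination): <SCODE>import torch.nn as nn\nimport torch.nn.init as init\n\nmodel = nn.Sequential(\n    nn.Conv2d(3, 16, 3),\n    nn.Conv2d(16, 32, 3),\n)\n\nfor name, module in model.named_modules():\n    if isinstance(module, nn.Conv2d):\n        if name == '0':                      # first conv: Xavier\n            init.xavier_normal_(module.weight)\n        else:                                # remaining convs: Kaiming\n            init.kaiming_normal_(module.weight)\n        if module.bias is not None:\n            module.bias.data.zero_()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},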
{
"contents": "I found a course on Deep Learning from Chinese University of Hong Kong useful. They have a tutorial on PyTorch that shows how we can initialize different layers. \nHere 83 is the link to the course. \nHere 77 is the link to the slides for the tutorial. \nHere 63 is the link to the code that is discussed during the tutorial. \nHere 11 is the link to the audio for the class session. Slide number 14 talks about how to initialize parameters. In the audio at 36 minutes in, the instructor talks about this slide. I think this will help.",
"isAccepted": false,
"likes": 4,
"poster": "shaun"
}
] | false |
The training speed of between GPU and CPU is same? | null | [
{
"contents": "When I using Pytorch for conditional gan , I found that the training speed had little change no matter using GPU nor CPU. There are some mistake in my codes?",
"isAccepted": false,
"likes": null,
"poster": "xiaoxing_zeng"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "Are you putting the net/model in cuda? Unlike Tensorflow, here you have to specify that you want to use GPU. For small models there isn’t much difference, but for big models there difference is quite big.",
"isAccepted": false,
"likes": null,
"poster": "Ismail_Elezi"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "xiaoxing_zeng"
},
{
"contents": "Yes, you should send all the variables in cuda, but in addition you also need to send both the neural networks in cuda, in addition to the criterior (which contains the cost function). Something like: <SCODE>if opt.cuda:\n discriminator.cuda()\n generator.cuda()\n criterion.cuda()\n input, label = input.cuda(), label.cuda()\n<ECODE> might do the trick.",
"isAccepted": false,
"likes": 2,
"poster": "Ismail_Elezi"
}
] | false |
Question about programming preference | null | [
{
"contents": "when define a new net I find that I would use an object of a nn.Module class for multiple times so I just defined this object one time and use it in every nn.Module as follow: <SCODE>feature = FeatureExtracter(SUBMODEL)\n \nclass Net1(nn.Module): \n def forward(self,x):\n s_feats =feature(x,LAYER)\n ......#do something\n return ......\n\nclass Net2(nn.Module): \n def forward(self,x):\n s_feats =feature(x,LAYER)\n ......#do something\n return ......\n<ECODE> was that a bad programming habit ? or is there any reason in pytorch I shouldn’t do that, and instead define new object in every Net like this: <SCODE>class Net1(nn.Module): \n def __init__(self):\n super().__init__()\n self.feature = FeatureExtracter(SUBMODEL)\n def forward(self,x):\n s_feats = self.feature(x,LAYER)\n ......\n return ......\n\nclass Net2(nn.Module): \n def __init__(self):\n super().__init__()\n self.feature = FeatureExtracter(SUBMODEL)\n def forward(self,x):\n s_feats = self.feature(x,LAYER)\n ......\n return ......<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "LiZeng"
},
{
"contents": "Both models are fine, neither is bad programming habit.",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
Backprop Through sparse_dense_matmul | null | [
{
"contents": "Do You have a roadmap for the whole sparse module? I’m curious if you’re planning to use cuSPARSE and if there is any way to contribute to this module. I’d need a backward pass for sparse_mat @ dense_mat. What are my options? Can I write a Function using the available spmm for forward, and add some quick-and-dirty cython code to do the backward? This would mean returning a sparse grad, is this supported?",
"isAccepted": false,
"likes": null,
"poster": "elanmart"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ebetica"
},
{
"contents": "Hi, for now sparse tensor support basically evolves with the needs of our respective projects. Basic GPU support is being worked on - it will rely on cuSPARSE for some operations. returning sparse gradients from backward should work in addition to the thread @ebetica pointed to, since pytorch supports hybrid tensors (i.e. tensors with both sparse and dense dimensions), we may add a sparse * dense -> hybrid function in the future (where “hybrid” here means one sparse dimension and one dense dimension). That would probably be more efficient in this case.",
"isAccepted": false,
"likes": null,
"poster": "martinraison"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "elanmart"
}
] | false |
Variable logical operators bug? | null | [
{
"contents": "Consider the following code: <SCODE>from torch.autograd import Variable\nt = torch.Tensor([2])\nbool(torch.max(t) < 2)\nOut[4]: \nFalse\nbool(torch.max(t) < 3)\nOut[5]: \nTrue\n<ECODE> However, If you do the same with a Variable: <SCODE>bool(torch.max(v) < 2)\nOut[6]: \nTrue\nbool(torch.max(v) < 3)\nOut[7]: \nTrue\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Nadav_Bhonker"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "albanD"
},
{
"contents": "Well that makes sense. Thanks.",
"isAccepted": false,
"likes": null,
"poster": "Nadav_Bhonker"
}
] | false |
Factorizarion machines | null | [
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "Gidi_Sh"
},
{
"contents": "it is definitely not done yet (I am trying to keep tabs on user-implemented pytorch repositories)",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Gidi_Sh"
},
{
"contents": "Hi there, I recently started using PyTorch (more familiar with Keras) and here is 1st my attempt at Factorization Machine: Remark ; I implemented it using the O(k.N) formulation found in Steffen Rendle paper",
"isAccepted": false,
"likes": 3,
"poster": "mzaradzki"
},
{
"contents": "If you’re interested, I have also implemented a factorization machine in pytorch – I cythonized the forward and backward passes, and so it’s relatively fast. Definitely a work-in-progress still!",
"isAccepted": false,
"likes": 3,
"poster": "Jack_Hessel"
}
] | false |
How can I convert the result from a Conv layer to a vector | null | [
{
"contents": "<SCODE>class Discriminator(nn.Module):\n def __init__(self, nc,ndf):\n super(Discriminator, self).__init__()\n self.features = nn.Sequential(\n # input is (nc) x 64 x 64\n nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf) x 32 x 32\n nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 2),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf*2) x 16 x 16\n nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 4),\n nn.LeakyReLU(0.2, inplace=True),\n # state size. (ndf*4) x 8 x 8\n nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ndf * 8),\n nn.LeakyReLU(0.2, inplace=True)\n # state size. (ndf*8) x 4 x 4\n #nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),\n #nn.Linear(ndf * 8,1),\n #nn.Sigmoid()\n )\n self.classifier= nn.Sequential(\n nn.Linear(self.inpelts,1),\n nn.Sigmoid()\n )\n def forward(self, input):\n x=self.features(input) \n x = x.view(x.size(0), -1)#B,chans*H*W\n self.inpelts=x.size(1)\n print 'x shape ',x.size()\n output=self.classifier(x)\n return output\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "rogetrullo"
},
{
"contents": "<SCODE>linear_weight = nn.Parameter(torch.randn(1, 1), requires_grad=True) # resize this in the forward function\n<ECODE> And in the forward function: <SCODE># some stuff before, where x is the input to linear\nif self.linear_weight.data.nelement() == 1:\n self.linear.weight.data.resize_(1, x.size(1)) # output_features x input_features)\n # initialize weights randomly\n self.linear.weight.data.normal_(-0.1, 0.1)\nx = nn.functional.linear(x, self.linear_weight)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
Pixelwise weights for MSELoss | null | [
{
"contents": "I would like to use a weighted MSELoss function for image-to-image training. I want to specify a weight for each pixel in the target. Is there a quick/hacky way to do this, or do I need to write my own MSE loss function from scratch?",
"isAccepted": false,
"likes": 1,
"poster": "abweiss"
},
{
"contents": "you can do this: <SCODE>def weighted_mse_loss(input, target, weights):\n out = input - target\n out = out * weights.expand_as(out)\n # expand_as because weights are prob not defined for mini-batch\n loss = out.sum(0) # or sum over whatever dimensions\n return loss\n<ECODE>",
"isAccepted": false,
"likes": 5,
"poster": "smth"
},
{
"contents": "Oh, I see. There’s no magic to the loss functions. You just calculate whatever loss you want using predefined Torch functions and then call backward on the loss. That’s super easy. Thanks!",
"isAccepted": false,
"likes": 1,
"poster": "abweiss"
},
{
"contents": "what if I want L2 Loss ? just do it like this? <SCODE>def weighted_mse_loss(input,target,weights):\n out = (input-target)**2\n out = out * weights.expand_as(out)\n loss = out.sum(0) # or sum over whatever dimensions\n return loss\n\n<ECODE> right ?",
"isAccepted": false,
"likes": 3,
"poster": "liygcheng"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "Jordan_Campbell"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "delhi_loafer"
},
{
"contents": "Same question, what is your thoughts now ?",
"isAccepted": false,
"likes": null,
"poster": "JindongJiang"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "delhi_loafer"
},
{
"contents": "Thanks. Actually, variablization must be done if we need them operate at gpu(s).",
"isAccepted": false,
"likes": null,
"poster": "JindongJiang"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "picolo"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ptrblck"
},
{
"contents": "I am trying an unsupervised approach to identify “important” areas of an image without explicitly pre-defining the weight-map. For the other question, I am not sure yet. I am thinking of a simple form like this - LW+(1/W) where L W is elementwise multiplication of learned weights and the MSE/MAE loss. and the second term prevents going too close to 0?",
"isAccepted": false,
"likes": null,
"poster": "picolo"
}
] | false |
How can I do element-wise batch matrix multiplication? | null | [
{
"contents": "I have two tensors of shape (16, 300) and (16, 300) where 16 is the batch size and 300 is some representation vector. I want to compute the element-wise batch matrix multiplication to produce a matrix (2d tensor) whose dimension will be (16, 300). So, in short I want to do 16 element-wise multiplication of two 1d-tensors. I can do this using a for loop but is there any way, I can do it using torch API?",
"isAccepted": false,
"likes": 3,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": 7,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "What is batch element-wise multiplication? Can you show me your for loop?",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "If you have tensor a and b both of shape (16, 300) and you want to get a tensor of shape (16, 300) by element-wise multiplication, I assume you can do a*b?",
"isAccepted": false,
"likes": 1,
"poster": "ZeweiChu"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": 8,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "wangg12"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wangg12"
},
{
"contents": "we dont have auto-broadcasting yet.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "Could we use just if A is matrix and v is a vector ? It worked for me but i am not sure if the BP works fine.",
"isAccepted": false,
"likes": null,
"poster": "dpappas"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wandering007"
},
{
"contents": "I solved like this, but maybe I am late to the party: <SCODE>def bmul(vec, mat, axis=0):\n mat = mat.transpose(axis, -1)\n return (mat * vec.expand_as(mat)).transpose(axis, -1)\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "e.pignatelli"
}
] | false |
Shared parameter model and submodel BP | null | [
{
"contents": "<SCODE>class mm(nn.Module):\n def __init__(self):\n super(mm, self).__init__()\n self.m = nn.Linear(3,2)\n self.m2 = nn.Linear(2,4)\n def forward(self, input):\n o1 = self.m(input)\n o2 = self.m2(o1)\n return o1, o2\n\nmmodel = mm()\noptimizer = optim.SGD(mmodel.parameters(), lr=1, momentum=0.8)\n#optimizer = optim.Adam(mmodel.parameters(), lr=1, betas=(0.5, 0.999))\ninput = Variable(torch.randn(1,3))\nprint(mmodel.m.weight.data)\nprint(mmodel.m2.weight.data)\nmmodel.zero_grad()\noutput1, output = mmodel(input)\noutput1.backward(torch.ones(1,2))\n#output.backward(torch.ones(1,4))\noptimizer.step()\nprint(mmodel.m.weight.data)\nprint(mmodel.m2.weight.data)\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "Marvin"
}
] | false |
Net forward success,but backward failed, somebody know why? | null | [
{
"contents": "I build a net, I used cross entropy loss, the forward is success, but the backward failed! I don’t know why it doesn’t work. <SCODE>RuntimeErrorTraceback (most recent call last)\n<ipython-input-3-c3211f22ae0b> in <module>()\n132 print \"loss: {}, train_acc: {}\".format(loss.data[0], train_acc)\n133 \n--> 134 loss.backward()\n135 opt.step()\n136 \n\n/root//lib/python2.7/site-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)\n 144 'or with gradient w.r.t. the variable')\n 145 gradient = self.data.new().resize_as_(self.data).fill_(1)\n--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n 147 \n 148 def register_hook(self, hook):\n\n/root//lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in backward(self, grad_output)\n 307 def backward(self, grad_output):\n 308 return tuple(grad_output.narrow(self.dim, end - size, size) for size, end\n--> 309 in zip(self.input_sizes, _accumulate(self.input_sizes)))\n 310 \n 311 \n\n/root//lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in <genexpr>((size, end))\n 306 \n 307 def backward(self, grad_output):\n--> 308 return tuple(grad_output.narrow(self.dim, end - size, size) for size, end\n 309 in zip(self.input_sizes, _accumulate(self.input_sizes)))\n 310 \n\nRuntimeError: out of range at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensor.c:367<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "songbo.han"
},
{
"contents": "<SCODE>opt = O.Adagrad(filter(lambda p: p.requires_grad,model.parameters()),lr=config[\"lr\"],)\n<ECODE> I remove one parameter in code . because I wanna use pretrained vector (glove) , is this reason?",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
},
{
"contents": "I tested it, i’t not about the parameters",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": null,
"poster": "albanD"
},
{
"contents": "thank you for your time. I do the concat operation!but I never change the content of a Variable .",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "albanD"
},
{
"contents": "OK, I will do a small example, the original code is too much.wait a minute",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
},
{
"contents": "this is my example! I finally reproduced it! <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport numpy as np\n\nclass Fnn2D3(nn.Module):\n def __init__(self,input_dim, hidden_dim, dp_ratio):\n super(Fnn2D3,self).__init__()\n self.out = nn.Sequential(\n nn.Dropout(dp_ratio),\n nn.Linear(input_dim, hidden_dim,bias=False),\n nn.ReLU(),\n nn.Dropout(dp_ratio),\n nn.Linear(hidden_dim, hidden_dim,bias=False),\n nn.ReLU())\n \n def forward(self, inputs):\n a,b,c = inputs.size()\n inputs = inputs.view(-1,c)\n outputs = self.out(inputs)\n outputs = outputs.view(a,b,-1)\n return outputs\n \n \nclass Mlp2(nn.Module):\n def __init__(self,input_dim, hidden_dim, output_dim,dp_ratio):\n super(Mlp2,self).__init__()\n self.out = nn.Sequential(\n nn.Dropout(dp_ratio),\n nn.Linear(input_dim,hidden_dim,bias=False),\n nn.ReLU(),\n nn.Dropout(dp_ratio),\n nn.Linear(hidden_dim,output_dim,bias=False)\n )\n \n def forward(self, inputs):\n return self.out(inputs)\nclass Example(nn.Module):\n def __init__(self):\n super(Example,self).__init__()\n self.cmp1 = Fnn2D3(600,200,0.2)\n self.cmp2 = Fnn2D3(600,200,0.2)\n self.mlp = Mlp2(400,200,3,0.2)\n \n def forward(self,a,b,c,d):\n a = self.cmp1(torch.cat((a, c), 2))\n b = self.cmp2(torch.cat((b, d), 2))\n \n a = torch.mean(a,1)\n b = torch.mean(b,1)\n print a.size()\n print b.size()\n # hypo_mpool:(batch,1,cmp_dim),prem_mpool..\n out = self.mlp(torch.cat((a,b), -1).view(5,-1))\n \n return out\n \na = Variable(torch.randn(5,30,300))\nb = Variable(torch.randn(5,23,300))\nc = Variable(torch.randn(5,30,300))\nd = Variable(torch.randn(5,23,300))\ne = Variable(torch.from_numpy(np.array([1,0,1,2,1])))\n\nopt = O.Adagrad(model.parameters(),lr=config[\"lr\"],)\n\nmodel = Example()\noutput = model(a,b,c,d)\nprint output.size()\ncriterion = nn.CrossEntropyLoss()\n\n\nloss = criterion(output,e)\nprint loss\nloss.backward()\nopt.step()<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "albanD"
},
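{
"contents": "For later readers: the traceback points at the backward of the concatenation, and the suspicious call is torch.cat(..., -1); on the PyTorch version used here the backward did not handle negative dimension indices, so passing an explicit non-negative dimension avoids the out-of-range error. A sketch of the changed lines inside Example.forward: <SCODE># give torch.cat the actual index of the last dimension instead of -1\na = torch.mean(a, 1)\nb = torch.mean(b, 1)\nout = self.mlp(torch.cat((a, b), a.dim() - 1).view(5, -1))\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},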
{
"contents": "so much thank you for u!! it works!",
"isAccepted": false,
"likes": null,
"poster": "songbo.han"
}
] | false |
Convert pixel wise class tensor to image segmentation | null | [
{
"contents": "So as the label has format batch_size x 1 x height x width I can calculate cross entropy loss with: <SCODE>criterion = nn.CrossEntropyLoss()\n\n# shape: batch_size x 1 x height x width = 1 x 1 x 256 x 256\ninputs = autograd.Variable(images)\n# shape: batch_size x 1 x height x width = 1 x 1 x 256 x 256\ntargets = autograd.Variable(labels)\n# shape: batch_size x 1 x height x width = 1 x 256 x 256 x 256\noutputs = model(inputs)\n\noptimizer.zero_grad()\nloss = criterion(outputs.view(-1, 256), targets.view(-1))\nloss.backward()\noptimizer.step()\n<ECODE> I know that outputs[0][0][i][j| corresponds to the probability that the pixel at (i, j) belongs to class 1. So if want to transform outputs of shape 1 x 256 x 256 x 256 to 1 x 1 x 256 x 256 I would need to find the maximum (probability) of every pixel and assign it to the corresponding class value. I could do this manually by iterating over every class and pixel with numpy but I wonder if there is any better way using tensor operations?",
"isAccepted": false,
"likes": 3,
"poster": "bodokaiser"
},
{
"contents": "<SCODE>nllcrit = nn.NLLLoss2d(size_average=True) # need to write a functional interface for it\ndef criterion(input, target):\n return nllcrit(F.log_softmax(input), target)\n<ECODE> <SCODE>output = output.permute(0, 2, 3, 1).view(-1, ncls).squeeze()\ntarget = target.view(-1).squeeze()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "fmassa"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "bodokaiser"
},
{
"contents": "In torch, max and argmax are computed together and returned as a tuple. So outputs.max(0)[1] is the native torch way to do this.",
"isAccepted": false,
"likes": 3,
"poster": "jekbradbury"
},
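{
"contents": "Concretely, for a score tensor laid out as (batch, classes, height, width), a sketch of collapsing the class dimension into a label map: <SCODE>import torch\n\nscores = torch.randn(1, 256, 256, 256)   # (batch, n_classes, H, W), sizes from the post above\npred = scores.max(1)[1]                   # argmax over the class dimension -> (1, 256, 256)\npred = pred.unsqueeze(1)                  # (1, 1, 256, 256), same layout as the label tensor\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},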
{
"contents": "Any ideas what could be wrong? This happens with both models (UNet and 1-layer conv) so I guess there must be something wrong with loss or optimization.",
"isAccepted": false,
"likes": null,
"poster": "bodokaiser"
},
{
"contents": "<SCODE> # outputs.shape =(batch_size, n_classes, img_cols, img_rows) \n outputs = outputs.permute(0, 2, 3, 1)\n # outputs.shape =(batch_size, img_cols, img_rows, n_classes) \n outputs = outputs.resize(batch_size*img_cols*img_rows, n_classes)\n labels = labels.resize(batch_size*img_cols*img_rows)\n loss = F.cross_entropy(outputs, labels)\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "meetshah1995"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "fmassa"
}
] | false |
Batch processing of sequences with different length | null | [
{
"contents": "Hello, I have came up with something like this. q1 and q2 are just list of integer word indices. <SCODE> def SimilarityModule(nn.Module)\n def __init__():\n self.embedding = nn.Embedding(...)\n ....\n def forward(self, q1, q2):\n q1_embeds = self.embeddings(q1)\n q2_embeds = self.embeddings(q2)\n #mean over whole sequence\n q1_repre = torch.mean( q1_embeds, 0 ) \n q2_repre = torch.mean( q2_embeds, 0 )\n\n dot_product = torch.dot( q1_repre, q2_repre ) / torch.norm( q1_repre, 2 ) / torch.norm( q1_repre, 2 )\n\n return dot_product\n<ECODE> I have a question how properly create a batch of the sequences with variable length and feed it to the module? Thank you in advance for help!",
"isAccepted": false,
"likes": null,
"poster": "octopusyo"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "smth"
},
{
"contents": "Thank you , Is there any way to work with “truly” variable length sequences? For instance, I would want to have a CNN with a RNN on top, I would also need masking?",
"isAccepted": false,
"likes": null,
"poster": "octopusyo"
},
{
"contents": "You would use padding in the CNN, then take the output and pass it to pack_padded_sequence for the RNN.",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "octopusyo"
},
{
"contents": "Masking is mandatory but the PyTorch RNNs natively support variable length sequences (created by pack_padded_sequence) and will correctly avoid processing the padding tokens, even for the reverse direction of a bidirectional RNN.",
"isAccepted": false,
"likes": 2,
"poster": "jekbradbury"
},
{
"contents": "Could you plz point to some example where variable length sequences are being passed in a batch",
"isAccepted": false,
"likes": 1,
"poster": "aradhyamathur"
},
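{
"contents": "A small end-to-end sketch of batching variable-length sequences with pack_padded_sequence (the vocabulary and all sizes are made up): <SCODE>import torch\nimport torch.nn as nn\nfrom torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n\nembedding = nn.Embedding(100, 8, padding_idx=0)\nlstm = nn.LSTM(8, 16, batch_first=True)\n\n# two sequences of lengths 4 and 2, zero-padded and sorted by decreasing length\nseqs = torch.LongTensor([[5, 2, 9, 1],\n                         [7, 3, 0, 0]])\nlengths = [4, 2]\n\nembedded = embedding(seqs)                                    # (2, 4, 8)\npacked = pack_padded_sequence(embedded, lengths, batch_first=True)\npacked_out, (h, c) = lstm(packed)\nout, out_lengths = pad_packed_sequence(packed_out, batch_first=True)\nprint(out.size())   # (2, 4, 16); positions past each true length come back as zeros\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": null
},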
{
"contents": "I do not understand what is really meant by “masking” - could anyone explain please?",
"isAccepted": false,
"likes": null,
"poster": "josmi9966"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "flyaway"
}
] | false |
(Newbie Question) Getting the gradient of output with respect to the input | null | [
{
"contents": "Hello all, I’m new to using neural network libraries, so I’m sorry if this is a stupid question. I have a pretrained network with a 28x28 input(MNIST) image and 10 outputs. I want to get the gradient of one of those outputs wrt the input. Thank you very much in advance!",
"isAccepted": false,
"likes": null,
"poster": "ckanbak"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "DiffEverything"
},
{
"contents": "<SCODE>import torch\na = torch.autograd.Variable(torch.Tensor(10,10))\nprint(a.requires_grad) # prints False\n<ECODE>",
"isAccepted": false,
"likes": 2,
"poster": "albanD"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "DiffEverything"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ckanbak"
},
{
"contents": "Hi, <SCODE>for digit in selected_digits:\n output[digit].backward(retrain_variables=True)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "albanD"
},
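As a concrete illustration of the suggestion above, here is a minimal sketch with a made-up stand-in network (assuming a recent PyTorch where torch.autograd.grad is available):
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in for the pretrained net

x = torch.randn(1, 1, 28, 28, requires_grad=True)
logits = model(x)            # shape (1, 10)
digit = 3                    # the output unit we care about

# option 1: backward on that single scalar populates x.grad
logits[0, digit].backward(retain_graph=True)
print(x.grad.shape)          # (1, 1, 28, 28)

# option 2: autograd.grad returns the gradient without touching .grad
g, = torch.autograd.grad(logits[0, digit], x)
<ECODE>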
{
"contents": "<SCODE>net1.zero_grad(); net2.zero_grad()\nx = Variable(torch.ones((1,20)), requires_grad=True)\ny = net1(x)\nz = net2(y)\nz.backward(torch.ones(z.size()),retain_variables=True)\nprint(x.grad) # this will give non-zero value\nprint(y.grad) # this is all zero\n\nnet1.zero_grad(); net2.zero_grad()\ny = torch.from_numpy(y.data.numpy())\ny = Variable(y, requires_grad=True)\nz = net2(y)\nz.backward(torch.ones(z.size()),retain_variables=True)\nprint(y.grad) # this will give non-zero value<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Guan-Horng_Liu"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
{
"contents": "Hi, would this still be the preferred approach in version 1.0, I can’t seem to get retrain_variable=True to work and am trying to do the same thing Thanks John",
"isAccepted": false,
"likes": null,
"poster": "fromLittleAcorns"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": null,
"poster": "albanD"
},
{
"contents": "If I do this will the gradients coming out of input increment each time or will they be overwritten. I read that gradients are retained and only cleared when the model is zero grad is called. If they are incremental then can I get each individual value but taking the difference of the previous value and the latest? Thanks again",
"isAccepted": false,
"likes": null,
"poster": "fromLittleAcorns"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "albanD"
}
] | false |
Wasserstein loss layer/criterion | null | [
{
"contents": "Are there any plans for an (approximate) Wasserstein loss layer to be implemented - or maybe its already out there? The theory’s and implementation is a little bit beyond my superficial understanding, (Appendix D), but it seems quite impressive!",
"isAccepted": false,
"likes": 2,
"poster": "AjayTalati"
},
{
"contents": "Yes I think that’s their particular application in the paper - but it could be more general than that? You might have seen this Python optimal transport library, I’m guessing that’s what they’ve implemented as their layer? Seems to be a solid piece of theory?",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "From what I understand, the POT library solves 4.1 (Entropic regularization of the Wasserstein distance, say W(p,q) ), deriving the gradient in 4.2 and the relaxation in 4.3 (first going to W(p_approx,q_approx)+DKL(p_approx,p)+DKL(q_approx,q) and then generalising DKL to allow p/q approx to not be distributions seems to go beyond that. Funny that they use the difference between MNIST class labels as a metric for the target. That reminded me of your regression approach.",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
{
"contents": "To be honest, I’m not too sure how to use the POT library yet - but if you want to play around in Mocha, here’s the test of the Wasserstein layer, and just for the sake of completness, here’s the code to go with the original paper “Sinkhorn Scaling for Optimal Transport” That’s probably the best place to start! Hopefully I’ll be able to make some sense of it all soon?",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "<SCODE>Performance on Validation Set after 10000 iterations\nAccuracy (avg over 1000) = 95.9%\" \n<ECODE> or so? float32 does not seem to provide the precision necessary to implement unmodified sinkhorn algorithm, at least in the Python Optimal Transport’s 1-d-OT example. The log-stabilized sinkhorn algorithm seems to work better at first sight. In the example, the difference between unregularized and regularized is ~1e6 and the difference between numpy@float64 and pytorch@float32 is ~1e-7. Interestingly, the mocha code seems to implement the unstabilized algorithm (unless they are doing the stabilization elsewhere). I’ll see about making it into a layer on another day.",
"isAccepted": false,
"likes": 1,
"poster": "tom"
},
{
"contents": "Wow, that was quick! Yes I get about the same, <SCODE>22-Mar 21:36:03:INFO:root:## Performance on Validation Set after 10000 iterations\n22-Mar 21:36:03:INFO:root:---------------------------------------------------------\n22-Mar 21:36:03:INFO:root: Accuracy (avg over 1000) = 95.8000%\n22-Mar 21:36:03:INFO:root:---------------------------------------------------------\n<ECODE> I’m still going over Marco Cuturi’s Matlab code - implementing a Wasserstein layers quite a long term project - unless you’ve got an immediate application in mind? For the time being I’m content with just understand it mathematically. It does seem to have a lot of potential if you want to train a network to give fast approximations to - existing slow simulation algorithms, or algorithms that currently calculate a Wasserstein metric using a linear program - there’s a lot of things like this in scientific computing - that’s actually the application I’ve got in mind. Chiyuan Zhang (pluskid) has already applied this to computationally intensive geological simulations, So basically - once your nets trained - you can run it forward in batch mode, and get output in second, which previously would have taken hours. Here’s a quote from the peper,",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "Anyway here’s the link, \"Stochastic Optimization for Large-scale Optimal Transport \"",
"isAccepted": false,
"likes": 1,
"poster": "AjayTalati"
},
{
"contents": "I found the code for “Stochastic Optimization for Large-scale Optimal Transport”",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "and the corresponding code (which is awfully simple):",
"isAccepted": false,
"likes": 2,
"poster": "smth"
},
{
"contents": "They should all roughly give the same value! It should be pretty simple to do the test between the different methods - here’s the example code from case 0) <SCODE>>>> from pyemd import emd\n>>> import numpy as np\n>>> first_signature = np.array([0.0, 1.0])\n>>> second_signature = np.array([5.0, 3.0])\n>>> distance_matrix = np.array([[0.0, 0.5], [0.5, 0.0]])\n>>> emd(first_signature, second_signature, distance_matrix)\n3.5\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "In some applications Wasserstein distances beyond W_1=EMD might be interesting (e.g. the W_2 inner product might be handy sometimes). The Wasserstein GAN paper and method is awesome, but I am not quite certain that the GAN distance does actually approximate the W_1 distance: As the authors point out there is the issue whether the supremum is actually attained in the test set of the maximization, (not sure how that compares with the discretization you have to do before using Sinkhorn etc., the linked paper Genevay et al paper kernelizes for the continuous case). The other thing about the Wasserstein GAN is that the maximizer of equation (3), even if the maximizer of equation (2) is contained in W, will not be a scaled version of the latter. This is not terribly relevant to the ends of the article as you still get the a good norm, so the authors do well to only briefly mention it. On the other hand it would be relevant if you use the W_1 distance for something where you need the W_1 distance itself, so you would need to compute the Lipschitz constant in the maximization procedure and divide by it in the quantity maximized in (3), i.e. for the candidates and not just the maximum.",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
{
"contents": "Anyway, starting with the easy stuff - at the moment I can’t get the entropy regularised version in, <SCODE><T,M> = trace(T'M) \n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "<SCODE>\nimport numpy as np\nimport matplotlib.pylab as pl\nimport pyemd \nimport ot\nfrom ot.datasets import get_1D_gauss as gauss\n\n#%% parameters\n\nn=100 # nb bins\n\n# bin positions\nx=np.arange(n,dtype=np.float64)\n\n# Gaussian distributions\na=gauss(n,m=20,s=5) # m= mean, s= std\nb=gauss(n,m=60,s=10)\n\n# loss matrix\nM=ot.dist(x.reshape((n,1)),x.reshape((n,1)))\nM/=M.max()\n\n#%% EMD with PyEMD library\n\ndiv, T_pyemd = pyemd.emd_with_flow(a,b,M)\nT_pyemd = np.array(T_pyemd)\n\n#%% EMD with POT library\n\nT_pot = ot.emd(a,b,M)\n\n#%% Sinkhorn with POT library\n\nlambd=1e-3\nT_sink = ot.sinkhorn(a,b,M,lambd)\n\n#%% plot each of the transport matrices\n\not.plot.plot1D_mat(a,b,T_pyemd,'Exact OT matrix using PyEMD')\npl.show()\n\not.plot.plot1D_mat(a,b,T_pot,'Exact OT matrix using POT')\npl.show()\n\not.plot.plot1D_mat(a,b,T_sink,'Sinkhorn OT matrix using POT')\npl.show()\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "To compute the distance, you need to integrate: <SCODE>dist=(M*T).sum()\n<ECODE> Something seems to be up with pyemd, though: If you use your example but with larger stddev (e.g. 10 and 30), the “optimal” plan pyemd emits here clearly is broken (as the monotone transport function gives the optimal plan and I see two distinct areas)…",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
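For anyone who wants to try this directly in PyTorch rather than through POT, here is a minimal (non-log-stabilized) Sinkhorn sketch; the histograms, regularization strength and iteration count are arbitrary choices for illustration:
<SCODE>
import torch

def sinkhorn(a, b, M, reg=1e-2, n_iters=200):
    # a: (n,), b: (m,), M: (n, m) cost matrix; returns the approximate transport plan
    K = torch.exp(-M / reg)
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)   # diag(u) K diag(v)

n = 100
x = torch.arange(n, dtype=torch.float64)
a = torch.softmax(-(x - 20.0) ** 2 / (2 * 5.0 ** 2), dim=0)    # discretized Gaussian
b = torch.softmax(-(x - 60.0) ** 2 / (2 * 10.0 ** 2), dim=0)
M = (x.view(-1, 1) - x.view(1, -1)) ** 2
M = M / M.max()

T = sinkhorn(a, b, M)
print((T * M).sum())   # entropy-regularized approximation of the transport cost
<ECODE>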
{
"contents": "Hey thanks a lot for that! I think you’ve found something! I do think PyEMD is a bit finicky, but the distances seem to be roughly the same, <SCODE>>>> #%% EMD with PyEMD library\n... \n>>> div_pyemd, T_pyemd = pyemd.emd_with_flow(a,b,M)\n>>> T_pyemd = np.array(T_pyemd)\n>>> div_pyemd\n0.16684170730400028\n>>> \n>>> #%% EMD with POT library\n... \n>>> T_pot = ot.emd(a,b,M)\n>>> div_pot_emd = np.trace( np.matmul( T_pot.transpose() , M) )\n>>> div_pot_emd\n0.16580581101906636\n<ECODE> I guess then, we should go with the POT library, as that’s probably more reliable?",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "<SCODE>\n#%% Use the downhill simplex method to find lambd\n \nimport numpy as np\nfrom scipy.optimize import minimize\n\nT_pot = ot.emd(a,b,M)\ndiv_pot_emd = np.trace( np.matmul( T_pot.transpose() , M) )\n\n#objective functon\ndef tune_lambda(x):\n #%% Sinkhorn approximate EMD using POT library\n T_sink = ot.sinkhorn(a,b,M,x)\n div_pot_sink = np.trace( np.matmul( T_sink.transpose() , M) )\n error = abs(div_pot_emd - div_pot_sink)\n return error\n\nx0 = 1e-3 # initial guess\n\nres = minimize(tune_lambda, x0, method='nelder-mead', options={'xtol': 1e-8, 'disp': True})\n\nprint(res.x)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "Check out figure 4, from Marco Cuturi’s original paper, <SCODE>#-----------------------------------------------------------------------------------------\n#%% Speed Tests - see https://github.com/rflamary/POT/blob/master/examples/plot_OT_1D.py\n#-----------------------------------------------------------------------------------------\n\nimport numpy as np\nimport pyemd \nimport ot\nimport functools\nfrom time import time\nfrom ot.datasets import get_1D_gauss as gauss\n\n#@functools.lru_cache (maxsize=None) # hash(tuple( yadda, yadda\ndef pyemd_linprog(n,a,b,M):\n start = time()\n for i in range(n): \n div_pyemd = pyemd.emd(a,b,M)\n run_time = time() - start\n print(run_time)\n\n#@functools.lru_cache (maxsize=None)\ndef pot_sinkhorn(n,a,b,M,lambd):\n start = time()\n for i in range(n): \n T_sink = ot.sinkhorn(a,b,M, lambd)\n #div_pot_sink = np.trace( np.matmul( T_sink.transpose() , M) )\n run_time = time() - start\n print(run_time)\n\n#%% parameters\n\nn=128 # nb bins\n\n# bin positions\nx=np.arange(n,dtype=np.float64)\n\n# Gaussian distributions\na=gauss(n,m=20,s=5) # m= mean, s= std\nb=gauss(n,m=60,s=10)\n\n# loss matrix\nM=ot.dist(x.reshape((n,1)),x.reshape((n,1)))\nM/=M.max()\n\n##--------\nruns=1000\nlambd = 1\n\n>>> pyemd_linprog( runs, a, b, M )\n21.02236819267273\n>>> pot_sinkhorn( runs, a, b, M, lambd )\n2.0207273960113525\n\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "I’m not terribly impressed by the numerical stability, I’ll have to look into that. Best regards Thomas",
"isAccepted": false,
"likes": 1,
"poster": "tom"
},
{
"contents": "Wow !!! That’s pretty amazing how quickly you did that !!! Very, very impressive. ot.lp.emd : Unregularized OT So now that you have a Wasserstein loss that you can backprop through maybe you want to train a plain vanilla GAN with it, it would be interesting to see how it compares, here’s a quite basic version that’s new, and converges quite fast, Be interesting if you could use your loss layer to improve it? Best regards, Ajay",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "Hi Tom, Your implementations fine, (thanks once again for trying it), I’m guessing the problems simply the inherent instability of the different versions of the SK algorithm?? To put it simply, if the linear program emd algorithm ranks, A and B, closer than A and C, then any approximate algorithm, (eg Sinkhorn-Knopp), should also give the same relative ranking. I’m also trying it with discrete distributions, i.e. the histograms used in the original emd paper, but it works much better with Gaussians. All the best, Ajay",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
}
] | false |
Custom Function - Open questions | null | [
{
"contents": "So I have a few questions for implementing a custom functions. We need to do something different than gradient descend thus we try to to implement something within the forward/backward mode of pytorch, but calculating more than gradients. However, currently on a very simple example, pytorch just crashes with: <SCODE>Traceback (most recent call last):\n File \"/home/alex/work/python/nn-second-order/bin/test.py\", line 94, in <module>\n loss.backward()\n File \"/usr/local/lib/python3.4/dist-packages/torch/autograd/variable.py\", line 145, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\nRuntimeError: could not compute gradients for some functions\n<ECODE> The code we are using looks like this: <SCODE>import torch\nfrom torch.autograd import Variable\n\ntorch.nn.Linear\n\n\nclass Gn_SquareLoss(torch.autograd.Function):\n def forward(self, x, y):\n self.save_for_backward(x, y)\n diff = (x - y)\n return diff.pow(2)\n\n def backward(self, grad_output):\n # N x D\n x, y = self.saved_tensors\n diff = (x - y) * 2\n # N x D\n grad_input = grad_output.clone()\n # N x D\n g = [grad_input * diff, grad_input * diff]\n d = g[0].size()[1]\n # D x D\n m = [torch.eye(d, d) * 2, torch.eye(d, d) * 3]\n # (N+D) x D\n s1 = torch.cat((m[0], g[0]), 0)\n # (N+D) x D\n s2 = torch.cat((m[0], g[0]), 0)\n return s1, s2\n\n\nclass Gn_Dot(torch.autograd.Function):\n def forward(self, x, w):\n self.save_for_backward(x, w)\n return torch.mm(x, w)\n\n def backward(self, grad_output):\n # N x H, H x D\n x, w = self.saved_tensors\n # (N + D) x D\n grad_input, = grad_output.clone()\n d = grad_input.size()[1]\n # D x D\n m = grad_input[:d]\n # N x D\n g = grad_input[d:]\n # N x H\n dx = torch.mm(g, torch.transpose(w, 0, 1))\n # H x D\n dw = torch.mm(torch.transpose(x, 0, 1), g)\n # D x H\n m = torch.mm(m, torch.transpose(w, 0, 1))\n # (D + N) x H\n dx = torch.cat((m, dx), 0)\n return dx, dw\n\n\nclass Gn_Tanh(torch.autograd.Function):\n def forward(self, x):\n self.save_for_backward(x)\n return torch.tanh(x)\n\n def backward(self, grad_output):\n x, = self.saved_tensors\n grad_input = grad_output.clone()\n d = grad_input.size()[1]\n m = grad_input[:d]\n g = grad_input[d:]\n dx = torch.cat((m, g * (1 - torch.tanh(x).pow(2))), dimension=0)\n return dx\n\ndtype = torch.FloatTensor\n# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU\n\n# N is batch size; D_in is input dimension;\n# H is hidden dimension; D_out is output dimension.\nN, D_in, H, D_out = 64, 1000, 100, 10\n\n# Create random Tensors to hold input and outputs, and wrap them in Variables.\nx = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)\ny = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)\n\n# Create random Tensors for weights, and wrap them in Variables.\nw1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)\nw2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)\n\naffine = Gn_Dot()\ntanh = Gn_Tanh()\nsquare_loss = Gn_SquareLoss()\n\nlearning_rate = 1e-6\nfor t in range(500):\n # Forward pass: compute predicted y using operations on Variables; we compute\n # ReLU using our custom autograd operation.\n y1 = affine(x, w1)\n h1 = tanh(y1)\n y2 = affine(h1, w2)\n loss1 = (y - y2).pow(2).sum()\n loss = square_loss(y2, y).sum()\n print(t, loss.data[0], loss1.data[0])\n\n # Manually zero the gradients before running the backward pass\n w1.grad.data.zero_()\n w2.grad.data.zero_()\n\n # Use autograd to compute the backward pass.\n 
loss.backward()\n\n # Update weights using gradient descent\n w1.data -= learning_rate * w1.grad.data\n w2.data -= learning_rate * w2.grad.data\n<ECODE> Note that in order to satisfy pytorch’s requirement to have the number of gradients outputed same as inputs, we concatenate the extra thing we calculate (for testing purposes this is identity matrix) and then in each layer we peel them out. Now the main issue is that this should work, however the error pytorch is giving us in a mystery why? My guess is something is happening in the C API, but I do not know what.",
"isAccepted": false,
"likes": null,
"poster": "botev"
},
{
"contents": "There are plenty of small bugs in your code. If you replace the functions by <SCODE>affine = torch.mm\ntanh = torch.tanh\nsquere_loss = torch.nn.MSELoss()\n<ECODE> and one at a time change to your function, you will have better error messages. in your case, most of the time you want to do def backward(self, grad_output) instead of def backward(self, *grad_output)\n you should unpack the saved tensors (it is a tuple, not a tensor), so x, = self.saved_tensors\n you are getting the size of grad_input.size()[1], but then using it to index the 0th dimension. Is that right? It’s interesting though that the error messages get lost when everything is put together,",
"isAccepted": false,
"likes": 1,
"poster": "fmassa"
},
{
"contents": "Thanks for the fast reply, and I’m always happy to learn! Hope this make it clear.",
"isAccepted": false,
"likes": null,
"poster": "botev"
}
] | false |
Autograd Function vs nn.Module? | null | [
{
"contents": "Thanks!",
"isAccepted": false,
"likes": 3,
"poster": "Shihan_su"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": 1,
"poster": "albanD"
}
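A rough rule of thumb, with a tiny sketch of both (using the newer static-method Function interface): if the computation can be written entirely with existing differentiable torch operations, an nn.Module is enough and autograd derives the backward for you; a torch.autograd.Function is only needed when you want to specify the backward formula yourself:
<SCODE>
import torch
import torch.nn as nn

# Module: just compose existing ops, no backward needed
class Squash(nn.Module):
    def forward(self, x):
        return x / (1 + x.abs())

# Function: you supply forward AND backward explicitly
class SquashFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x / (1 + x.abs())

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return grad_out / (1 + x.abs()) ** 2   # d/dx [x / (1 + |x|)]

x = torch.randn(5, requires_grad=True)
Squash()(x).sum().backward()          # gradient derived by autograd
SquashFn.apply(x).sum().backward()    # gradient from the custom backward
<ECODE>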
] | false |
How to manipulate layer parameters by it’s names? | null | [
{
"contents": "So how can I set one specific layer’s parameters by the layer name, say “conv3_3” ? In pytorch I get the model parameters via: <SCODE>params = list(model.parameters())\n\nfor p in params:\n print p.size()\n<ECODE> But how can I get parameter according to a layer name and then change its values? <SCODE>caffe_params = caffe_model.parameters()\n\ncaffe_params['conv3_1'] = np.zeros((64, 128, 3, 3))\n<ECODE>",
"isAccepted": true,
"likes": 5,
"poster": "zeakey"
},
{
"contents": "",
"isAccepted": true,
"likes": 23,
"poster": "smth"
},
{
"contents": "<SCODE>optimizer = optim.Adam([param for name, param in model.state_dict().iteritems()\n if 'foo' in name], lr=args.lr)\n<ECODE>",
"isAccepted": true,
"likes": 2,
"poster": "solidor"
},
{
"contents": "",
"isAccepted": true,
"likes": 4,
"poster": "solidor"
},
{
"contents": "Hi Smth, Is there some way I can name a layer using a custom name. Thanks.",
"isAccepted": true,
"likes": null,
"poster": "Nan_Meng"
},
{
"contents": "",
"isAccepted": true,
"likes": 4,
"poster": "chsasank"
},
{
"contents": "thank you so much !!!",
"isAccepted": true,
"likes": null,
"poster": "Nan_Meng"
},
{
"contents": "\nnn.Module.named_parameters() does not contain parameters like batch_norm’s running_mean and running_var\n \nnn.Module.parameters() does not contain parameter names \nmodel.state_dict() does not contain parameters that we can update requires_grad property",
"isAccepted": true,
"likes": 26,
"poster": "Emrah_Yigit"
},
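A small sketch tying these together: named_parameters() can be turned into a dict, which makes it easy to read, overwrite or freeze a specific layer's weights by name (the layer name here is only an example):
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential()
model.add_module('conv3_1', nn.Conv2d(128, 64, 3, padding=1))
model.add_module('relu', nn.ReLU())

params = dict(model.named_parameters())
print(list(params.keys()))                # ['conv3_1.weight', 'conv3_1.bias']

with torch.no_grad():
    params['conv3_1.weight'].zero_()      # set one layer's weights by name

params['conv3_1.weight'].requires_grad = False   # freeze just that layer
params['conv3_1.bias'].requires_grad = False
<ECODE>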
{
"contents": "<SCODE>for name, param in model.named_parameters():\n param.requires_grad = False```<ECODE>",
"isAccepted": true,
"likes": 1,
"poster": "An_Tran"
},
{
"contents": "<SCODE># https://discuss.pytorch.org/t/how-to-get-the-module-names-of-nn-sequential/39682\n# looping through modules but get the one with a specific name\n\nimport torch\nimport torch.nn as nn\n\nfrom collections import OrderedDict\n\nparams = OrderedDict([\n ('fc0', nn.Linear(in_features=4,out_features=4)),\n ('ReLU0', nn.ReLU()),\n ('fc1L:final', nn.Linear(in_features=4,out_features=1))\n])\nmdl = nn.Sequential(params)\n\n# throws error\n# mdl['fc0']\n\nfor m in mdl.children():\n print(m)\n\nprint()\n\nfor m in mdl.modules():\n print(m)\n\nprint()\n\nfor name, m in mdl.named_modules():\n print(name)\n print(m)\n\nprint()\n\nfor name, m in mdl.named_children():\n print(name)\n print(m)\n<ECODE>",
"isAccepted": true,
"likes": null,
"poster": "Brando_Miranda"
},
{
"contents": "Just to be sure, this will replace the current module with this new module if the name is the same?",
"isAccepted": true,
"likes": null,
"poster": "whanafy"
},
{
"contents": "Just wanted to know, have you found any solution to this problem?",
"isAccepted": true,
"likes": null,
"poster": "gilbertocunha"
},
{
"contents": "For example, to scale a specific layer by a scalar c I did the following: <SCODE>weight_name = 'fc2.weight'\nc = 100\nfor name, param in model.named_parameters():\n if param.requires_grad:\n if name == weight_name:\n param.data = c*param.data\n<ECODE>",
"isAccepted": true,
"likes": null,
"poster": "Alon41"
}
] | true |
How to exclude Embedding layer from Model.parameters()? | null | [
{
"contents": "I use an embedding layer to project one-hot indices to continuous space. However, during the training, I don’t want to update the weight of it. How could I do that?",
"isAccepted": false,
"likes": null,
"poster": "ShawnGuo"
},
{
"contents": "you can set the weight of the embedding layer to not require grad. <SCODE>m = nn.Embedding(...)\nm.weight.requires_grad=False\n<ECODE>",
"isAccepted": false,
"likes": 4,
"poster": "smth"
},
{
"contents": "Oh, I see. Thank you very much.",
"isAccepted": false,
"likes": null,
"poster": "ShawnGuo"
},
{
"contents": "<SCODE>ValueError: optimizing a parameter that doesn't require gradients\n<ECODE> The optimizer is used in the following way: <SCODE>self.optimizer = optim.Adadelta(self.model.parameters(), lr=args.learning_rate)\n<ECODE> And, model is defined as follow: <SCODE>class DecomposableModel(nn.Module):\n def __init__(self, word_embedding, config):\n super(DecomposableModel, self).__init__()\n self.name = 'DecomposableModel'\n\n self.drop_p = config['drop_p']\n\n self.word_dim = word_embedding.embeddings.size(1)\n self.embedding = nn.Embedding(word_embedding.embeddings.size(0), self.word_dim)\n self.embedding.weight = nn.Parameter(word_embedding.embeddings)\n self.embedding.weight.requires_grad = False\n # self.embedding_normalize()\n\n self.F = nn.Linear(self.word_dim, config['F_dim'])\n self.G = nn.Linear(2 * self.word_dim, config['G_dim'])\n self.H = nn.Linear(2 * config['G_dim'], config['relation_num'])\n\n self.cuda_flag = config['cuda_flag']\n\n def forward(self, p_ids, h_ids):\n ......\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "ShawnGuo"
},
{
"contents": "Hi,",
"isAccepted": false,
"likes": null,
"poster": "albanD"
}
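For the optimizer error above, a commonly used workaround is to hand the optimizer only the parameters that still require gradients; a minimal self-contained sketch:
<SCODE>
import torch.nn as nn
import torch.optim as optim

emb = nn.Embedding(1000, 50)
emb.weight.requires_grad = False          # frozen embedding
head = nn.Linear(50, 5)
model = nn.Sequential(emb, head)

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.Adadelta(trainable, lr=1.0)
<ECODE>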
] | false |
When backward() can be safely omitted? | null | [
{
"contents": "Hi, I am trying to implementing a customized loss function as follows, can it work without a customized backward routine ? Actually, how do I know if the operation can be supported by autograd automatically so that I don’t need to specify backward pass? <SCODE>class Loss(torch.autograd.Function):\n\t'''\n\tImplement the loss function from output from RNN.\n\tRef paper: https://arxiv.org/abs/1308.0850\n\t'''\n\tdef __init__(self):\n\t\t'''\n\t\tx is sequence of coordinates with dim (batch, seq_length, 3).\n\t\tParameters are sequence of output from rnn with dim (batch, seq_length, 128).\n\t\t'''\n\t\tself.e = [] \t # predicted end of stroke probability scalar\n\t\tself.m1 = [] # vector of means for x1 with len 20\n\t\tself.m2 = [] # vector of means for x2 with len 20\n\t\tself.pi = [] # vector of mixture density network coefficients with len 20\n\t\tself.rho = [] # vector of correlation with len 20\n\t\tself.s1 = [] # vector of standard deviation for x1 with len 20\n\t\tself.s2 = [] # vector of standard deviation for x2 with len 20\n\t\tself.x1 = [] # x1 coordinate at t+1\n\t\tself.x2 = [] # x2 coordinates at t + 1\n\t\tself.et = [] # end of probability indicator from ground truth\n\t\tself.parameters = []\n\t\tself.batch = 0 #batch size\n\t\tself.seq_length = 0 # reduce by 1 because loss is caculated at t+1 timestamp\n\t\n\tdef forward(self, x, para):\n\t\t''' \n\t\tImplement eq 26 of ref paper for single time stamp.\n\t\t'''\n\t\tself.save_for_backward(para)\n\t\ttotal_loss = 0\n\t\tfor i in range(self.seq_length):\n\t\t\t# prepare parameters\n\t\t\tself.__get_para(i, x, para)\n\t\t\tnormalpdf = self.__para2normal(self.x1, self.x2, self.m1, self.m2, self.s1, self.s2, self.rho) #dim (n_batch, 20)\n\t\t\tsingle_loss = self.__singleLoss(normalpdf)\n\t\t\ttotal_loss += single_loss\n\t\treturn total_loss\n\t\n\tdef __get_para(self, i, x, para):\n\t\t'''\n\t\tSlice and process parameters to the right form.\n\t\tImplementing eq 18-23 of ref paper.\n\t\t'''\n\t\tself.batch = torch.size(x)[0]\n\t\tself.e = torch.sigmoid(-para[:,i,0]) # eq 18\n\t\tself.parameters = para\n\t\tself.seq_length = torch.size(x)[1] -1 # reduce by 1 because loss is caculated at t+1 timestamp\n\t\n\t\t# slice remaining parameters and training inputs\n\t\tself.pi, self.m1, self.m2, self.s1, self.s2, self.rho = torch.split(self.parameters[:,i,1:], 6, dim = 1)\n\t\tself.x1 = x[:,i+1,0]\n\t\tself.x2 = x[:,i+1,1]\n\t\tself.et = x[:,i+1,2]\n\t\t\n\t\t## process parameters\n\t\t# pi\n\t\tmax_pi = torch.max(self.pi, dim = 1)[0]\n\t\tmax_pi = max_pi.expand_as(self.pi)\n\t\tself.pi.sub_(max_pi)\n\t\tred_sum = torch.sum(self.pi, dim = 1).expand_as(self.pi)\n\t\tself.pi.div_(red_sum)\n\t\n\t\t# sd\n\t\tself.s1.exp_()\n\t\tself.s2.exp_()\n\t\n\t\t# rho\n\t\tself.rho.tanh_()\n\t\n\t\t# reshape ground truth x1, x2 to match m1, m2 because broadcasting is currently not supported by pytorch\n\t\tself.x1.expand_as(self.m1)\n\t\tself.x2.expand_as(self.m2)\n\n\t\n\tdef __para2normal(self, x1, x2, m1, m2, s1, s2, rho):\n\t\t'''\n\t\tImplement eq 24, 25 of ref paper.\n\t\t'''\n\t\tnorm1 = x1.sub(m1)\n\t\tnorm2 = x2.sub(m2)\n\t\ts1s2 = torch.mul(s1, s2)\n\t\tz = torch.pow(torch.div(norm1, s1), 2) + torch.pow(torch.div(norm2, s2), 2) - \\\n\t\t\t2*torch.div(torch.mul(pho, torch.mul(norm1, norm2)), s1s2)\n\t\tnegRho = 1 - torch.pow(rho, 2)\n\t\texpPart = torch.exp(torch.div(-z, torch.mul(negRho, 2)))\n\t\tcoef = 2*np.pi*torch.mul(s1s2, torch.sqrt(negRho))\n\t\tresult = torch.div(expPart, coef)\n\t\treturn result\n\n\t\t\n\tdef 
__singleLoss(self, normalpdf):\n\t\t'''\n\t\tCalculate loss for single time stamp.\n\t\tInput: normalpdf (n_batch, 20).\n\t\t'''\n\t\tepsilon = 1e-20 # floor of loss from mixture density component since initial loss could be zero\n\t\tmix_den_loss = torch.mul(self.pi, normalpdf)\n\t\tred_sum_loss = torch.sum(mix_den_loss) # sum for all batch\n\t\tend_loss = torch.mul(self.e, self.et) + torch.mul(1-self.e, 1 - self.et)\n\t\ttotal_loss = -torch.log(tf.max(red_sum_loss, epsilon)) - torch.log(end_loss)\n\t\treturn total_loss/self.batch\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Shihan_su"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
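As a quick illustration of the general rule: as long as the loss is built from torch tensor operations on autograd-tracked inputs, no hand-written backward is needed; a sketch in the modern tensor style:
<SCODE>
import torch

x = torch.randn(4, 3, requires_grad=True)
mu = torch.randn(4, 3)
sigma = torch.rand(4, 3) + 0.1

# negative log of an (unnormalized) diagonal Gaussian, written only with torch ops
z = ((x - mu) / sigma) ** 2
loss = (0.5 * z + sigma.log()).sum()

loss.backward()            # autograd derives the gradient automatically
print(x.grad.shape)        # (4, 3)
<ECODE>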
] | false |
No broadcasting for Variable? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "xwgeng"
},
{
"contents": "Thanks for the suggestion. Also, what is the reason that broadcasting is not supported?",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "xwgeng"
}
] | false |
Operation between Tensor and Variable | null | [
{
"contents": "Suppose I do this: <SCODE>x = Variable(torch.rand(3,5))\ny = torch.Tensor(torch.rand(3,5))\nprint(x + y)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "JerryLin"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "<SCODE>In [2]: y = torch.autograd.Variable(torch.Tensor(4))\nIn [3]: y.requires_grad\nOut[3]: False<ECODE>",
"isAccepted": false,
"likes": 4,
"poster": "vabh"
},
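For completeness, the usual fix in the old Variable API was to wrap the plain tensor before mixing it into the graph; a minimal sketch (in 0.4+ the two types are merged and x + y works directly):
<SCODE>
import torch
from torch.autograd import Variable   # kept as a thin wrapper in recent versions

x = Variable(torch.rand(3, 5))
y = torch.rand(3, 5)

z = x + Variable(y)    # old-style fix: wrap the raw tensor first
w = x + y              # works directly once Variables and tensors are merged
<ECODE>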
{
"contents": "I also have question about this. Why is it designed like this?",
"isAccepted": false,
"likes": null,
"poster": "maggie"
}
] | false |
Linear combination of feature maps | vision | [
{
"contents": "Hi, I want do to some linear combination of feature maps. How do I do this in pytorch?",
"isAccepted": false,
"likes": null,
"poster": "zhoubinxyz"
},
{
"contents": "I just realize that this operation is a convolution of kernel size 1x1.",
"isAccepted": false,
"likes": null,
"poster": "zhoubinxyz"
}
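A tiny sketch of that observation: a 1x1 convolution computes, at every spatial location, a learned linear combination of the input feature maps (the sizes here are arbitrary):
<SCODE>
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)            # 64 input feature maps
mix = nn.Conv2d(64, 16, kernel_size=1)    # 16 linear combinations of the 64 maps
y = mix(x)
print(y.shape)                            # torch.Size([1, 16, 32, 32])

# the same thing written by hand for the first output map
w = mix.weight[0, :, 0, 0]                # (64,)
manual = (x * w.view(1, 64, 1, 1)).sum(dim=1) + mix.bias[0]
print(torch.allclose(manual, y[:, 0], atol=1e-5))   # True
<ECODE>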
] | false |
Need help on data normalization | vision | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jinwoo_choi"
},
{
"contents": "or similar function of image.lcn on torch",
"isAccepted": false,
"likes": null,
"poster": "jinwoo_choi"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
Implementation of Batch Renormalization fails unexpectedly (Segementation fault core dumped) | null | [
{
"contents": "This is my implementation of Batch Renormalization. <SCODE>class BatchReNorm1d(Module):\n\n def __init__(self, num_features, eps=1e-5, momentum=0.1, rmax=3.0, dmax=5.0, affine=True):\n super(BatchReNorm1d, self).__init__()\n self.num_features = num_features\n self.affine = affine\n self.eps = eps\n self.momentum = momentum\n self.rmax = rmax\n self.dmax = dmax\n if self.affine:\n self.weight = Parameter(torch.Tensor(num_features))\n self.bias = Parameter(torch.Tensor(num_features))\n else:\n self.register_parameter('weight', None)\n self.register_parameter('bias', None)\n self.register_buffer('running_mean', torch.zeros(num_features))\n self.register_buffer('running_var', torch.ones(num_features))\n self.register_buffer('r', torch.ones(1))\n self.register_buffer('d', torch.zeros(1))\n self.reset_parameters()\n\n def reset_parameters(self):\n self.running_mean.zero_()\n self.running_var.fill_(1)\n self.r.fill_(1)\n self.d.zero_()\n if self.affine:\n self.weight.data.uniform_()\n self.bias.data.zero_()\n\n def _check_input_dim(self, input):\n if input.size(1) != self.running_mean.nelement():\n raise ValueError('got {}-feature tensor, expected {}'\n .format(input.size(1), self.num_features))\n\n def forward(self, input): \n self._check_input_dim(input)\n if self.training:\n sample_mean = torch.mean(input, dim=0)\n sample_var = torch.var(input, dim=0)\n \n self.r = torch.clamp(sample_var.data / self.running_var, \n 1./self.rmax, self.rmax)\n self.d = torch.clamp((sample_mean.data - self.running_mean)/ self.running_var,\n -self.dmax, self.dmax)\n \n input_normalized = (input - sample_mean.expand_as(input))/sample_var.expand_as(input)\n input_normalized = input_normalized*Variable(self.r).expand_as(input)\n input_normalized += Variable(self.d).expand_as(input)\n \n self.running_mean += self.momentum * (sample_mean.data - self.running_mean)\n self.running_var += self.momentum * (sample_var.data - self.running_var)\n \n if self.affine:\n input_normalized = input_normalized * self.weight.expand_as(input)\n input_normalized += self.bias.unsqueeze(0).expand_as(input)\n return input_normalized\n \n else:\n return input_normalized\n# else:\n# input_normalized = (input - self.running_mean.expand_as(input))/self.running_var.expand_as(input)\n# if self.affine:\n# return input_normalized * self.weight.expand_as(input) + self.bias.expand_as(inputs)\n# else:\n# return input_normalized\n \n def __repr__(self):\n return ('{name}({num_features}, eps={eps}, momentum={momentum},'\n ' affine={affine})'\n .format(name=self.__class__.__name__, **self.__dict__))\n<ECODE> When I forward through it, using a toy model, my IPython kernel dies without any error? What could be the possible explanation for this behaviour? The strange thing is: the kernel dies at different points for different iterations",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "Below is the output from my terminal <SCODE>(pytorch) zafar@inspiron:~/Desktop$ gdb --args python batchrenorm.py\nGNU gdb (Ubuntu 7.11.1-0ubuntu1~16.04) 7.11.1\nCopyright (C) 2016 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\"...\nReading symbols from python...(no debugging symbols found)...done.\n(gdb) r\nStarting program: /home/zafar/pytorch/bin/python batchrenorm.py\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n[New Thread 0x7ffff3c06700 (LWP 15536)]\n[New Thread 0x7ffff3405700 (LWP 15537)]\n[New Thread 0x7fffeec04700 (LWP 15538)]\n[New Thread 0x7fffec403700 (LWP 15539)]\n[New Thread 0x7fffe9c02700 (LWP 15540)]\n[New Thread 0x7fffe9401700 (LWP 15541)]\n[New Thread 0x7fffe4c00700 (LWP 15542)]\nFiles already downloaded and verified\nFiles already downloaded and verified\n[Thread 0x7fffe9401700 (LWP 15541) exited]\n[Thread 0x7fffe9c02700 (LWP 15540) exited]\n[Thread 0x7fffec403700 (LWP 15539) exited]\n[Thread 0x7fffeec04700 (LWP 15538) exited]\n[Thread 0x7ffff3405700 (LWP 15537) exited]\n[Thread 0x7ffff3c06700 (LWP 15536) exited]\n[Thread 0x7fffe4c00700 (LWP 15542) exited]\n[New Thread 0x7fffe4c00700 (LWP 15548)]\n[New Thread 0x7fffe9401700 (LWP 15549)]\n[New Thread 0x7fffe9c02700 (LWP 15550)]\n[New Thread 0x7fffec403700 (LWP 15551)]\n[New Thread 0x7ffff3961980 (LWP 15552)]\n[New Thread 0x7ffff3560a00 (LWP 15553)]\n[New Thread 0x7ffff315fa80 (LWP 15554)]\n\nThread 1 \"python\" received signal SIGSEGV, Segmentation fault.\nmalloc_consolidate (av=av@entry=0x7ffff7bb4b20 <main_arena>) at malloc.c:4179\n4179\tmalloc.c: No such file or directory.\n(gdb) bt\n#0 malloc_consolidate (av=av@entry=0x7ffff7bb4b20 <main_arena>)\n at malloc.c:4179\n#1 0x00007ffff78710a8 in _int_free (av=0x7ffff7bb4b20 <main_arena>, \n p=<optimized out>, have_lock=0) at malloc.c:4073\n#2 0x00007ffff787498c in __GI___libc_free (mem=<optimized out>)\n at malloc.c:2966\n#3 0x00007fffd8ff6e2e in THFloatStorage_free ()\n from /home/zafar/pytorch/lib/python3.5/site-packages/torch/lib/libTH.so.1\n#4 0x00007fffd900f564 in THFloatTensor_free ()\n from /home/zafar/pytorch/lib/python3.5/site-packages/torch/lib/libTH.so.1\n#5 0x00007fffdf58261d in THPFloatTensor_dealloc (self=0x7fffaf076a88)\n at /data/users/soumith/builder/wheel/pytorch-src/torch/csrc/generic/Tensor.cpp:70\n#6 0x000000000055d9ba in ?? 
()\n#7 0x00007fffdf8d9309 in THPPointer<_object>::~THPPointer (this=0x3d06220, \n __in_chrg=<optimized out>)\n at /data/users/soumith/builder/wheel/pytorch-src/torch/csrc/utils/object_ptr.h:12\n#8 std::_Head_base<0ul, THPPointer<_object>, false>::~_Head_base (\n this=0x3d06220, __in_chrg=<optimized out>)\n at /data/users/soumith/miniconda2/envs/py35k/gcc/include/c++/tuple:129\n#9 std::_Tuple_impl<0ul, THPPointer<_object>, int, std::unique_ptr<torch::autograd::VariableVersion, std::default_delete<torch::autograd::VariableVersion> > >:---Type <return> to continue, or q <return> to quit---q\nQuit\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "When I break this line into two lines: <SCODE>input_normalized = (input - sample_mean.expand_as(input))\ninput_normalized = input_normalized/sample_var.expand_as(input)\n<ECODE> things are working fine (at least I hope so). And this behaviour is intriguing.",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "Whoa, that doesn’t make very much sense. I’ll see if I can run your code today.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Zafarullah_Mahmood"
},
{
"contents": "Is there a version for batch renormalisation 2d?",
"isAccepted": false,
"likes": 3,
"poster": "Hengck"
}
] | false |
How to use pack_padded_sequence in seq2seq models | null | [
{
"contents": "Then when I sort sequences by length I get: So, h(hidden state) and c(cell state) from encoder will be incorrect for corresponding sentences in the decoder in the batch. How can I solve it? Or I shouldn’t use pack_padded_sequence? Thanks!",
"isAccepted": false,
"likes": 2,
"poster": "VladislavPrh"
},
{
"contents": "You should keep track of the order somehow. It’s pretty common for seq2seq models to use some kind of attentional input feeding in the decoder which prevents nn.LSTM from being used for all decoder timesteps in one call; in that case I sort the batch based only on source sentence length and use an unrolled LSTM cell in the decoder. Code that sorts the batch together according to the source sentence length will be in torchtext shortly.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
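A sketch of the usual bookkeeping (sort by source length, pack, then undo the sort so the encoder states line up with the original batch order again); the tensors are made up and this assumes a reasonably recent PyTorch:
<SCODE>
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

src = torch.randn(4, 7, 10)              # (batch, max_len, feat), zero-padded
lengths = torch.tensor([3, 7, 5, 2])     # true lengths, in original batch order

sorted_len, sort_idx = lengths.sort(descending=True)
inv_idx = sort_idx.argsort()             # permutation that undoes the sort

encoder = nn.LSTM(10, 16, batch_first=True)
packed = pack_padded_sequence(src[sort_idx], sorted_len.tolist(), batch_first=True)
_, (h, c) = encoder(packed)

h = h[:, inv_idx]                        # back to the original batch order
c = c[:, inv_idx]
<ECODE>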
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "vijendra_rana"
},
{
"contents": "After attempting to implement seq2seq with packed_sequences, I have come to the conclusion that this is not really feasable as, It appears to me that fundamentally, packed sequences require a lot of fenangling to work as currently implemented. It feels like they could benefit from a little more abstraction to make things a bit easier for the programmer. Happy to be corrected/proven wrong here!",
"isAccepted": false,
"likes": null,
"poster": "DuaneNielsen"
}
] | false |
What is action.reinforce(r) doing actually? | reinforcement-learning | [
{
"contents": "Hi, Thanks.",
"isAccepted": false,
"likes": 12,
"poster": "yunjey"
},
{
"contents": "",
"isAccepted": false,
"likes": 13,
"poster": "apaszke"
},
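Conceptually, action.reinforce(r) told autograd to use the score-function (REINFORCE) estimator, i.e. to backpropagate r times the gradient of log p(action) into the distribution parameters. In current PyTorch the same estimator is written explicitly as a surrogate loss with torch.distributions; a minimal sketch with made-up rewards:
<SCODE>
import torch
from torch.distributions import Categorical

logits = torch.randn(4, 3, requires_grad=True)   # stand-in for a policy network output
dist = Categorical(logits=logits)
action = dist.sample()
reward = torch.tensor([1.0, 0.5, -0.2, 2.0])     # whatever the environment returned

# REINFORCE surrogate: its gradient is -reward * d log pi(a|s) / d params
loss = -(reward * dist.log_prob(action)).sum()
loss.backward()
print(logits.grad)
<ECODE>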
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "yunjey"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "zuoxingdong"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "apaszke"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "wjaskowski"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "wjaskowski"
},
{
"contents": "I think I see what you’re asking now. The gradient estimates from all the examples in the batch are added together in the stochastic node (there’s no implicit division by the batch size) so, depending on your use case, you may want to manually divide your rewards by the batch size.",
"isAccepted": false,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "abhigenie92"
},
{
"contents": "<SCODE>x = Variable(torch.Tensor([0.1, 0.9]), requires_grad=True)\ny = torch.multinomial(x, 1) # here, make sure y.data[0] = 1\nr = torch.Tensor([2]).float()\ny.inforce(r)\ny.backward()\nprint x.grad.data\n<ECODE>",
"isAccepted": false,
"likes": 1,
"poster": "Crazyai"
},
{
"contents": "I implemented Vanilla Policy Gradient in this way and it works. However using the .reinforce method seems cleaner",
"isAccepted": false,
"likes": null,
"poster": "volpato30"
},
{
"contents": "To make it simple, let’s say 1D, given a mean and std, we have a sample = mean + std*eps, where eps ~ N(0, 1). It is not very clear to me why it is like that. Since if we have a sample, according to the formula, d_sample/d_mean = 1. So, grad_mean = upward_gradient * d_sample/d_mean",
"isAccepted": false,
"likes": null,
"poster": "zuoxingdong"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "lucasb-eyer"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "norm"
},
{
"contents": "I do not notice the paper you mentioned, can you give a PDF file or the link. Thanks.",
"isAccepted": false,
"likes": null,
"poster": "zeng"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "lucasb-eyer"
},
{
"contents": "<SCODE>\"\"\"\nTry some reinforce by hand and similar\n\"\"\"\n\nimport torch\nfrom torch import autograd, nn, optim\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy as np\n\n\ndef run_model(x, h1):\n torch.manual_seed(123)\n params = list(h1.parameters())\n\n x1 = h1(Variable(x))\n x2 = F.softmax(x1)\n a = torch.multinomial(x2)\n print('a', a)\n return x1, x2, a\n\n\ndef run_by_hand(params, x, h1):\n print('')\n print('=======')\n print('by hand')\n h1.zero_grad()\n x1, x2, a = run_model(x, h1)\n g = torch.gather(x2, 1, Variable(a.data))\n log_g = g.log()\n log_g.backward(- torch.ones(4, 1))\n print('params.grad', params.grad)\n\n\ndef run_pytorch_reinforce(params, x, h1):\n print('')\n print('=======')\n print('pytorch reinforce')\n h1.zero_grad()\n x1, x2, a = run_model(x, h1)\n a.reinforce(torch.ones(4, 1))\n autograd.backward([a], [None])\n print('params.grad', params.grad)\n\n\ndef run():\n N = 4\n K = 1\n C = 3\n\n torch.manual_seed(123)\n x = torch.ones(N, K)\n h1 = nn.Linear(K, C, bias=False)\n params = list(h1.parameters())[0]\n run_by_hand(params, x, h1)\n run_pytorch_reinforce(params, x, h1)\n\n\nif __name__ == '__main__':\n run()\n<ECODE> Result: <SCODE>=======\nby hand\na Variable containing:\n 1\n 1\n 0\n 1\n[torch.LongTensor of size 4x1]\n\nparams.grad Variable containing:\n 0.6170\n-1.3288\n 0.7117\n[torch.FloatTensor of size 3x1]\n\n\n=======\npytorch reinforce\na Variable containing:\n 1\n 1\n 0\n 1\n[torch.LongTensor of size 4x1]\n\nparams.grad Variable containing:\n 0.6170\n-1.3288\n 0.7117\n[torch.FloatTensor of size 3x1]\n<ECODE> (Its actually almost the same amount of code in fact?)",
"isAccepted": false,
"likes": null,
"poster": "hughperkins"
}
] | false |
How to simpler downsample an image tensor with bicubic? | vision | [
{
"contents": "Trying to downsample a batch of normalized image tensor but failed to get it work with transforms.Scale or PIL’s resize. Finally get it worked by : Is there a better way to do this?",
"isAccepted": false,
"likes": 1,
"poster": "orashi"
},
{
"contents": "If not, yes this seems appropriate.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
},
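For later readers: more recent releases expose bicubic interpolation directly on tensors through torch.nn.functional.interpolate, which avoids the round trip through PIL; a minimal sketch:
<SCODE>
import torch
import torch.nn.functional as F

batch = torch.randn(8, 3, 512, 512)    # normalized image batch, (N, C, H, W)

small = F.interpolate(batch, scale_factor=0.25, mode='bicubic', align_corners=False)
print(small.shape)                     # torch.Size([8, 3, 128, 128])
<ECODE>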
{
"contents": "What about nn.AdaptiveAvgPool2d?",
"isAccepted": false,
"likes": null,
"poster": "IanYeung"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "jia_lee"
}
] | false |
Per-Class weights in nn.NLLLoss2d | vision | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ajdroid"
},
{
"contents": "It’s a typo, it does support weights. I’ve fixed the documentation (it will regenerate shortly).",
"isAccepted": false,
"likes": 1,
"poster": "smth"
}
] | false |
Cannot Unsqueeze Empty Tensor | null | [
{
"contents": "When I load my data, I meet a problem: <SCODE>Traceback (most recent call last):\n File \"trainer.py\", line 41, in <module>\n for i, (rmap, label) in enumerate(trainloader):\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 174, in __next__\n return self._process_next_batch(batch)\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 198, in _process_next_batch\n raise batch.exc_type(batch.exc_msg)\nRuntimeError: Traceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 32, in _worker_loop\n samples = collate_fn([dataset[i] for i in batch_indices])\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 81, in default_collate\n return [default_collate(samples) for samples in transposed]\n File \"/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py\", line 68, in default_collate\n return torch.stack(batch, 0)\n File \"/usr/local/lib/python2.7/dist-packages/torch/functional.py\", line 56, in stack\n return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)\n File \"/usr/local/lib/python2.7/dist-packages/torch/functional.py\", line 56, in <genexpr>\n return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)\nRuntimeError: cannot unsqueeze empty tensor at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:530\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "ycszen"
},
{
"contents": "you need to give more context",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "I also came across the same issue when trying to build a custom dataset: <SCODE> def __getitem__(self, index):\n fname = self.train_filenames[index]\n image = Image.open(os.path.join(celeba_imgpath, fname))\n label = self.train_labels[index]\n return self.transform(image), torch.FloatTensor(label)\n\n<ECODE> I think it was caused by the label given as a scalar value. I converted the label to a list, and then the problem was solved. Here is the corrected code: <SCODE> def __getitem__(self, index):\n fname = self.train_filenames[index]\n image = Image.open(os.path.join(celeba_imgpath, fname))\n label = [self.train_labels[index]]\n return self.transform(image), torch.FloatTensor(label)\n<ECODE>",
"isAccepted": false,
"likes": 7,
"poster": "vmirly1"
},
{
"contents": "I had the same issue this solution worked for me. Thanks",
"isAccepted": false,
"likes": null,
"poster": "Anshul_Paigwar"
},
{
"contents": "I also had the same issue and is resolved with your solution. This saved a lot of time! Thanks",
"isAccepted": false,
"likes": null,
"poster": "sngth"
}
] | false |
Building a custom loss function in pytorch | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "There are other threads showcasing making custom loss functions. Any of those should get you started.",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
GPU slower than CPU on a simple RNN test code | null | [
{
"contents": "Hi, I wanted to write an RNN from scratch using the pytorch cuda capabilities and I ran some preliminary tests to compare the speed of the CPU vs GPU. The task is very simple and consists of a for loop mimicking the update of the internal state x in an RNN with recurrent weight matrix J. I’m using a Quadro K620 with cuda 8.0. When the size of x is N=1000 there seems to be a trade-off, with the GPU implementation consistently getting slower when the number of iterations increases (I ran some other tests with different sizes of the J matrix and this behaviour seems pretty systematic). This is an example of running times I get when running the enclosed script: <SCODE>cpu: [0.010117292404174805, 0.058980703353881836, 0.45785975456237793, 4.512230634689331]\ngpu: [0.0019445419311523438, 0.05474495887756348, 0.7503962516784668, 7.011191129684448]\n<ECODE> I’d really appreciate some help on this. Thanks in advance. The test script is the following: <SCODE>import numpy as np\nimport torch as tr\nimport math\nimport time\n\nGPUID = 0\ntr.cuda.set_device(GPUID)\n\nN = 1000\n\nJ = tr.randn(N,N)\nx = tr.randn(N)\nr = tr.randn(N)\ny = tr.randn(N)\n\nJn = J.numpy()\nxn = x.numpy()\nrn = r.numpy()\nyn = y.numpy()\n\ncputimes = []\nfor sampl in (100, 1000, 10000, 100000):\n start = time.time()\n for i in xrange(sampl):\n rn = np.tanh(xn)\n xn = Jn.dot(xn);\n end = time.time()\n cputimes.append(end-start)\nprint(cputimes)\n\nJc = J.cuda()\nxc = x.cuda()\nrc = r.cuda()\nyc = y.cuda()\n\ngputimes = []\nfor sampl in (100, 1000, 10000, 100000):\n start = time.time()\n for i in xrange(sampl):\n rc = tr.tanh(xc)\n xc = Jc.mv(xc);\n end = time.time()\n gputimes.append(end-start)\nprint(gputimes)\n<ECODE>",
"isAccepted": false,
"likes": 2,
"poster": "aleingrosso"
},
{
"contents": "I experienced the same thing. A very simple network runs faster in CPU than GPU.",
"isAccepted": false,
"likes": null,
"poster": "wasiahmad"
},
{
"contents": "Can anyone help? I experienced the same thing.",
"isAccepted": false,
"likes": null,
"poster": "gaojun4ever"
},
{
"contents": "Its totally normal that rnns run fast on cpus, compared to gpus. they send lots of teeny-tiny kernel launches, and the gpu cores spend all their time waiting for the next batch to arrive…",
"isAccepted": false,
"likes": null,
"poster": "hughperkins"
},
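One more thing worth checking with benchmarks like the one above: CUDA calls are asynchronous, so time.time() can end up measuring only the kernel launches unless the GPU is synchronized before reading the clock; a sketch of a fairer timing loop (same shapes as in the question):
<SCODE>
import time
import torch

N = 1000
J = torch.randn(N, N)
x = torch.randn(N)

if torch.cuda.is_available():
    Jc, xc = J.cuda(), x.cuda()
    torch.cuda.synchronize()           # make sure the copies have finished
    start = time.time()
    for _ in range(10000):
        xc = Jc.mv(torch.tanh(xc))
    torch.cuda.synchronize()           # wait for all queued kernels to finish
    print('gpu:', time.time() - start)
<ECODE>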
{
"contents": "By the way, you’ll get faster per-example times by increasing the batch size, at the expense that effective learning rate will probably decrease.",
"isAccepted": false,
"likes": null,
"poster": "hughperkins"
}
] | false |
Different Learning Rates within a Model | null | [
{
"contents": "How would I apply a different learning rate to different portions of a model? Would it be as simple as creating two optimizers with different sets of model parameters and calling optimizer.step() on both for each batch?",
"isAccepted": false,
"likes": 3,
"poster": "kellywzhang"
},
{
"contents": "",
"isAccepted": false,
"likes": 6,
"poster": "vabh"
}
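The usual way to do this with a single optimizer is per-parameter-group options; a sketch with a made-up two-part model (the group names and learning rates are arbitrary):
<SCODE>
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential()
model.add_module('features', nn.Linear(100, 50))
model.add_module('classifier', nn.Linear(50, 10))

optimizer = optim.SGD([
    {'params': model.features.parameters(), 'lr': 1e-4},    # slow for the backbone
    {'params': model.classifier.parameters(), 'lr': 1e-2},  # fast for the head
], momentum=0.9)

loss = model(torch.randn(8, 100)).pow(2).mean()
loss.backward()
optimizer.step()   # one call updates both groups, each with its own learning rate
<ECODE>
Two separate optimizers with step() called on each also work; the parameter-group form just keeps a single state_dict and a single step call.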
] | false |
A Neural Style with pretrained SqueezeNet with Torchvision | vision | [
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "LiZeng"
},
{
"contents": "great stuff man! thanks for sharing.",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
Is it possible to keep intermediate results of forward for backward? | null | [
{
"contents": "Hello, I would like to implement a new torch.autograd.Function where the gradient is closely related to an intermediate result. Is there a way to store the intermediate result for the backward pass to avoid having to compute it again? (Similar to save_for_backward, but that explicitly isn’t it…) Thank you Thomas",
"isAccepted": false,
"likes": 2,
"poster": "tom"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "smth"
},
{
"contents": "Thanks! <SCODE>RuntimeError: save_for_backward can only save input or output tensors, but argument 2 doesn't satisfy this condition\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
{
"contents": "The autograd system is designed to handle intermediate results, but only if they were formulated using torch.autograd.Variable() and torch tensor functions, afaik. The error is because one of the arguments is not a Variable.",
"isAccepted": false,
"likes": null,
"poster": "mxh000"
},
{
"contents": "The question is bit outdated, I encountered the same situation <SCODE>def forward(self, a, b, c):\n # okay\n # self.save_for_backward(a) \n \n # wrong!!\n self.save_for_backward(a+1)\n<ECODE> RuntimeError: save_for_backward can only save input or output tensors, but argument 2 doesn’t satisfy this condition",
"isAccepted": false,
"likes": null,
"poster": "lenscloth"
},
{
"contents": "Best regards Thomas",
"isAccepted": false,
"likes": 2,
"poster": "tom"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "SivanK"
},
{
"contents": "I have the same question (I need to store info for backward). Unfortunately, I don’t see where this requirement (store only input or output) comes from and how do I get around it (storing as a member in Function means you can use this function only once). Any ideas how to store intermediate things?",
"isAccepted": false,
"likes": null,
"poster": "arogozhnikov"
},
{
"contents": "I see two main options. Best regards Thomas <SCODE>class MyReLU(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input):\n ctx.save_for_backward(input)\n return input.clamp(min=0)\n\n @staticmethod\n def backward(ctx, grad_output):\n input, = ctx.saved_variables\n grad_input = grad_output.clone()\n grad_input[input < 0] = 0\n return grad_input\n\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
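To make the recipe concrete: intermediate tensors go through ctx.save_for_backward, while non-tensor values (integers, shapes, flags) can simply be stored as attributes on ctx; a small sketch in the newer static-method style:
<SCODE>
import torch

class ClampMax(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, max_val):
        mask = (x < max_val).float()    # intermediate result reused in backward
        ctx.save_for_backward(mask)     # tensors go here
        ctx.max_val = max_val           # plain Python values can live on ctx
        return x * mask + max_val * (1 - mask)

    @staticmethod
    def backward(ctx, grad_out):
        mask, = ctx.saved_tensors
        return grad_out * mask, None    # None for the non-tensor argument

x = torch.randn(5, requires_grad=True)
y = ClampMax.apply(x, 1.0)
y.sum().backward()
print(x.grad)
<ECODE>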
{
"contents": "Ok, probably it is a way to go, where do I find an example with instantiating function? Values that I need to store are actually integer numbers that occur during forward pass, so I can’t follow this recipe.",
"isAccepted": false,
"likes": null,
"poster": "arogozhnikov"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "arogozhnikov"
}
] | false |
GPU memory not returned | null | [
{
"contents": "<SCODE>class seg_GAN(object):\n def __init__(self, batch_size=10, height=512,width=512,channels=3, wd=0.0005,nfilters_d=64, checkpoint_dir=None, path_imgs=None, learning_rate=2e-8,lr_step=30000,lam_fcn=1, lam_adv=1,adversarial=False,nclasses=5):\n\n self.adversarial=adversarial\n self.channels=channels\n self.lam_fcn=lam_fcn\n self.lam_adv=lam_adv\n self.lr_step=lr_step\n self.wd=wd\n self.learning_rate=learning_rate\n self.batch_size=batch_size \n self.height=height\n self.width=width\n self.checkpoint_dir = checkpoint_dir\n self.path_imgs=path_imgs\n self.nfilters_d=nfilters_d\n self.organ_target=1#1 eso 2 heart 3 trach 4 aorta\n self.nclasses=nclasses\n self.netG=UNet(self.nclasses,self.channels)\n self.netG.apply(weights_init)\n\tif self.adversarial:\n\t self.netD=Discriminator(self.nclasses,self.nfilters_d,self.height,self.width)\n self.netD.apply(weights_init)\n\n self.dst = stanfordDataSet(self.path_imgs, is_transform=True)\n self.trainloader = data.DataLoader(self.dst, batch_size=self.batch_size, shuffle=True, num_workers=2)\n\n def train(self,config):\n print 'verion ',torch.__version__\n\n start=0#TODO change this so that it can continue when loading a model\n print(\"Start from:\", start)\n\n label_ones=torch.ones(self.batch_size)\n label_zeros=torch.zeros(self.batch_size)\n y_onehot = torch.FloatTensor(self.batch_size,self.nclasses,self.height, self.width) \n\n #print 'shape y_onehot ',y_onehot.size()\n if self.adversarial:\n self.netD.cuda()\n self.netG.cuda()\n label_ones,label_zeros,y_onehot=label_ones.cuda(),label_zeros.cuda(),y_onehot.cuda()\n \n y_onehot_var= Variable(y_onehot)\n label_ones_var = Variable(label_ones)\n label_zeros_var = Variable(label_zeros)\n if self.adversarial:\n optimizerD = optim.Adam(self.netD.parameters(), lr = self.learning_rate, betas = (0.5, 0.999))\n optimizerG = optim.Adam(self.netG.parameters(), lr = self.learning_rate, betas = (0.5, 0.999))\n\n for it in range(start,config.iterations):#epochs\n for i, (images,GT) in enumerate(self.trainloader): \n \n y_onehot.resize_(GT.size(0),self.nclasses,self.height, self.width)\n y_onehot.zero_()\n label_ones.resize_(GT.size(0))\n label_zeros.resize_(GT.size(0)) \n\n images = Variable(images.cuda()) \n #images = Variable(images)\n #print 'unique ',np.unique(GT.numpy())\n GT=GT.cuda()\n \n #print 'image size ',images.size()\n #print 'GT size ',GT.size()\n #print 'shape y_onehot ',y_onehot.size() \n y_onehot.scatter_(1,GT.view(GT.size(0),1,GT.size(1),GT.size(2)),1)#we need to add singleton dim so thatnum of dims is equal\n \n\n #GT=Variable(GT.cuda())#N,H,W\n GT=Variable(GT)#N,H,W\n if self.adversarial:\n\n ##########################\n #Update Discriminator\n ##########################\n #train with real samples\n self.netD.zero_grad()\n #print self.netD\n output=self.netD(y_onehot_var)#this must be in one hot\n errD_real =F.binary_cross_entropy(output,label_ones_var)#loss_D\n errD_real.backward()#update grads of netD\n \n\n # train with fake\n fake = self.netG(images)#this is a prob map which we want to be similar to y_onehot\n #print 'fake sz',fake.size()\n output = self.netD(fake.detach())#only for speed, so grads of netg are not computed\n errD_fake = F.binary_cross_entropy(output, label_zeros_var)\n \n errD_fake.backward()\n\n optimizerD.step()#update the parameters of netD\n\n ############################\n # Update G network\n ###########################\n self.netG.zero_grad()\n if self.adversarial:\n output_D=self.netD(fake)\n output_G, GT,label_ones,output_D\n errG = 
self.loss_G(fake,GT, label_ones_var,output_D)#here we should use ones with the fakes\n else:\n fake = self.netG(images)\n errG = self.loss_G(fake,GT)\n\n \n errG.backward()#backprop errors\n optimizerG.step()#optimize only netG params\n\n if i%10==0:\n print 'epoch ',it\n print 'iteration ',i\n if self.adversarial:\n print 'error real ',errD_real.data \n print 'error fake ',errD_fake.data\n print 'error Generator ',errG.data\n\n if it%5==0:\n print 'testing ...'\n name_img='0000047'\n meanval=self.dst.mean_rgb\n img_test_name=os.path.join(self.path_imgs,'iccv09Data',\"images_resized\",name_img+'.png')\n lab_test_name=os.path.join(self.path_imgs,'iccv09Data',\"labels_resized\",name_img+'_label.png')\n img = Image.open(img_test_name)\n img = np.array(img, dtype=np.uint8)\n label = Image.open(lab_test_name)\n label = np.array(label, dtype=np.uint8)\n img = img.astype(np.float32)\n img -= meanval\n img = img.transpose(2, 0, 1)\n\n img_ = torch.from_numpy(img)#.float()\n img_=img_.cuda()\n img_var=Variable(img_)\n prob_map = self.netG(img_var.view(1,img_.size(0),img_.size(1),img_.size(2)))\n prob_np=prob_map.data.cpu().numpy()\n prob_np=np.squeeze(prob_np)\n label_out=np.argmax(prob_np,0)\n print 'unique labout ',np.unique(label_out)\n print 'probmap ',prob_map.size()\n lab_visual=label2color(label_out)\n imsave('out_color.png',lab_visual)\n for idlabel in np.unique(label):\n diceratio=dice(label, label_out,idlabel)\n print 'dice id {} '.format(idlabel),diceratio\n\n\n def loss_G(self,output_G, GT,label_ones=None,output_D=None):\n fcnterm=CrossEntropy2d(output_G,GT)\n if self.adversarial:\n bceterm=F.binary_cross_entropy(output_D,label_ones)\n return fcnterm+self.lam_adv*bceterm\n else:\n return fcnterm\n<ECODE> I tested on 2 different machinesand the same problem is happening. What am I doing wrong? Thanks!!",
"isAccepted": false,
"likes": null,
"poster": "rogetrullo"
},
{
"contents": "<SCODE>ps aux|grep <username>|grep python|awk '{print $2}'|xargs kill\n<ECODE>",
"isAccepted": false,
"likes": 5,
"poster": "chenyuntc"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "rogetrullo"
},
{
"contents": "<SCODE> ps x |grep python|awk '{print $1}'|xargs kill\n<ECODE> <SCODE> ps x |grep ipykernel|awk '{print $1}'|xargs kill\n<ECODE> there should be an elegant way to kill the related process using only ps+awk",
"isAccepted": false,
"likes": 7,
"poster": "chenyuntc"
},
{
"contents": "Thanks! now I have the memory back.",
"isAccepted": false,
"likes": null,
"poster": "rogetrullo"
},
{
"contents": "What is the most principled way to quit a job when it is running dataloading? For example, control+c can easily cause such hanging issue (which can lead to a zombie process and the only way to get around is rebooting the machine).",
"isAccepted": false,
"likes": null,
"poster": "WendyShang"
},
{
"contents": "on python3.5+ i think this is not an issue. python 2.7 and multiprocessing seem to have some zombie issues.",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "muzi-8"
},
{
"contents": "ps x |grep ipykernel|awk '{print $1}'|xargs kill solved it. Thanks for that!",
"isAccepted": false,
"likes": null,
"poster": "SeperateReality"
},
{
"contents": "Hi, if I run 3 processes, e.g. PID = 3, 4, 5 respectively. thank you!",
"isAccepted": false,
"likes": null,
"poster": "Alpha"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Oktai15"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "igreen"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "albanD"
},
{
"contents": "can you give the complete command? I try this: ps x |grep python|awk ‘{print $1}’|xargs kill -9 pid. All python process still stop.",
"isAccepted": false,
"likes": null,
"poster": "igreen"
},
{
"contents": "I faced the same problem on jupyter lab and solved it by shutting down all sessions.",
"isAccepted": false,
"likes": null,
"poster": "DiracLee"
}
] | false |
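For reference, a hedged sketch of doing the same cleanup from Python instead of the shell one-liners above, using the third-party psutil package (assumed installed; the pattern string "train.py" is a made-up example):
<SCODE>
import getpass

import psutil  # third-party, assumed installed


def kill_stale_workers(pattern="train.py", dry_run=True):
    """List (and optionally kill) leftover processes of the current user
    whose command line matches `pattern`, e.g. dead DataLoader workers."""
    me = getpass.getuser()
    current = psutil.Process().pid
    for proc in psutil.process_iter(["pid", "username", "cmdline"]):
        try:
            cmd = " ".join(proc.info["cmdline"] or [])
            if proc.info["username"] == me and pattern in cmd and proc.pid != current:
                print("stale process?", proc.pid, cmd)
                if not dry_run:
                    proc.kill()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue


# inspect the list first, then re-run with dry_run=False to actually kill
kill_stale_workers(pattern="train.py", dry_run=True)
<ECODE>
Running with dry_run=True first avoids killing an unrelated process that happens to match the pattern.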
Erf and erfinv functions in PyTorch | null | [
{
"contents": "I have a torch.FloatTensor say A of size u,v,x,y. I have to calculate its erf and erfinv. Is there any way to do this in PyTorch. Please help me.",
"isAccepted": false,
"likes": 1,
"poster": "Soniya"
},
{
"contents": "What is Erf (and what is it’s inverse)? Is this an abbreviation?",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Soniya"
},
{
"contents": "I’m actually interested by this as well, could you give us some updates if you find a solution?",
"isAccepted": false,
"likes": null,
"poster": "hadware"
},
{
"contents": "<SCODE>import torch\nfrom scipy import special\n## reference - https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.erf.html\n\nu = 1\nv = 2\nx = 3\ny = 4\n\nA = torch.randn(u,v,x,y)\n\nspecial.erf( A.numpy() )\nspecial.erfinv( A.numpy())\n<ECODE>",
"isAccepted": false,
"likes": 5,
"poster": "AjayTalati"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Soniya"
},
{
"contents": "Nope, it does not work, because I also need the error function but as an autograd function; I mean, I would like to insert this function in the computation graph and pytorch to automatically differentiate it ! If the function is not supported for now in pytorch, may I define my own custom autograd function ?",
"isAccepted": false,
"likes": 1,
"poster": "cerisara"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "AjayTalati"
},
{
"contents": "<SCODE>import torch\na_for_erf = 8.0/(3.0*numpy.pi)*(numpy.pi-3.0)/(4.0-numpy.pi)\ndef erf_approx(x):\n return torch.sign(x)*torch.sqrt(1-torch.exp(-x*x*(4/numpy.pi+a_for_erf*x*x)/(1+a_for_erf*x*x)))\ndef erfinv_approx(x):\n b = -2/(numpy.pi*a_for_erf)-torch.log(1-x*x)/2\n return torch.sign(x)*torch.sqrt(b+torch.sqrt(b*b-torch.log(1-x*x)/a_for_erf))<ECODE> To get an impression of how they look, you could plot it against the scipy.special functions like <SCODE>from matplotlib import pyplot\n%matplotlib inline\nimport scipy.special\nx = numpy.linspace(-2,2,100)\npyplot.subplot(1,2,1)\npyplot.title('erf')\npyplot.plot(x,erf_approx(torch.from_numpy(x)).numpy(), label=\"approx\")\npyplot.plot(x,scipy.special.erf(x),'--', label=\"scipy\")\npyplot.legend()\npyplot.subplot(1,2,2)\ny = scipy.special.erf(x)\npyplot.title('erfinv')\npyplot.plot(y,erfinv_approx(torch.from_numpy(y)).numpy(), label=\"approx\")\npyplot.plot(y,scipy.special.erfinv(y),'--', label=\"scipy\")\npyplot.legend()<ECODE> (the %matplotlib is for jupyter). Best regards Thomas",
"isAccepted": false,
"likes": 5,
"poster": "tom"
},
{
"contents": "nice work !!! I found a very simple implementation of the sinkhorn-knopp matrix normalisation, we were looking at a while ago. It’s in MATLAB, but it’s very understandable, All the best, Aj",
"isAccepted": false,
"likes": 1,
"poster": "AjayTalati"
},
{
"contents": "Great ! Thank you, very useful !",
"isAccepted": false,
"likes": null,
"poster": "cerisara"
},
{
"contents": "One use case that I needed was to sample from a Gaussian VAE and do interpolation in uniform distributed variables. That is because u = erfinv(v) follows normal distribution if v follows uniform distribution via a technique called inverse sampling.",
"isAccepted": false,
"likes": null,
"poster": "zhangxiangxiao"
},
{
"contents": "",
"isAccepted": false,
"likes": 3,
"poster": "ira"
},
{
"contents": "Awesome, thank you! Best regards Thomas",
"isAccepted": false,
"likes": 1,
"poster": "tom"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "MAsad"
}
] | false |
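For reference, newer PyTorch releases (roughly 0.4 and later) ship torch.erf and torch.erfinv as built-in, autograd-aware operations, so the scipy and approximation workarounds above are only needed on older versions; a minimal sketch:
<SCODE>
import torch

x = torch.randn(2, 3, 4, 5, requires_grad=True)
y = torch.erf(x)              # elementwise error function, differentiable
y.sum().backward()
print(x.grad.shape)           # same shape as x

u = torch.rand(10) * 2 - 1    # erfinv expects values in (-1, 1)
print(torch.erfinv(u))
<ECODE>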
PyTorch vs Theano GPU RAM usage issue | null | [
{
"contents": "Hello, I’ve noticed that PyTorch (torch (0.1.10.post2), torchvision (0.1.7)) uses significantly more GPU RAM than e.g. Theano running a similar code. Unfortunately I cannot disclose the actual code but I think it should be possible to reproduce this behavior with the following simple sequential architecture (all dimensions are in form MINIBATCH_LENGTH x NUM_CHANNELS x H x W): N - minibatch size, e.g. 20, padding mode == “same” everywhere Main loop: <SCODE>net = Net()\nnet.cuda()\nnet.train()\n\noptimizer = optim.SGD(net.parameters(), lr = ...)\ncriterion = nn.MSELoss()\n\ninput = Variable(torch.from_numpy(<your-Nx10x1024x1024-tensor>).cuda())\ntarget = Variable(torch.from_numpy(<your-Nx10x1024x1024-tensor>).cuda())\n\nfor epoch in xrange(...):\n\n output = net(input)\n loss = criterion(output, target)\n \n net.zero_grad()\n loss.backward()\n optimizer.step()\n<ECODE> On a GTX-1060 corresponding code takes around 30% more GPU RAM than its Theano counterpart (exection times are about the same for Theano and PyTorch versions). Is this something that can be fixed? Thanks,",
"isAccepted": false,
"likes": null,
"poster": "vladimir"
},
{
"contents": "This might be one reason to explain the memory difference.",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "thanks for your reply. This seems to be the case indeed (the maximum mini-batch size seems to be quite similar for both Theano and PyTorch). However, is there a way to constrain memory allocation? In Theano there’s lib.cnmem which can be assigned % of memory to pre-allocate. It’s a soft limit but still helpful when sharing GPU between multiple jobs.",
"isAccepted": false,
"likes": null,
"poster": "vladimir"
},
{
"contents": "there is no way to constrain the memory to an upper bound right now.",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
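As an update to the memory-capping question: recent PyTorch versions (1.8+) expose a per-process soft limit that is similar in spirit to Theano's cnmem setting; a sketch, assuming such a recent version:
<SCODE>
import torch

if torch.cuda.is_available():
    # let this process use at most ~50% of GPU 0's memory; allocations beyond
    # that raise an out-of-memory error instead of growing further
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)

    # the caching allocator can also be inspected and flushed explicitly
    print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))
    torch.cuda.empty_cache()
<ECODE>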
Parameter / Weight sharing | null | [
{
"contents": "I’m a little lost on how it would be possible to perform weight sharing in pytorch. In my case I would like to do the following: Essentially I would like to reuse weights q.data.weights and weights w2 concatenated together in a loop. q.data.weights are the weights for conv2d layer q as well as being used in the loop, while weights w2 would ONLY be used in the conv2d layer in the loop (however, since it would be repeated n times, technically weights w2 would be shared as well). <SCODE>class Model(nn.Module):\n\n def __init__(self, config):\n super(Model, self).__init__()\n self.config = config\n self.w2 = Variable(torch.randn(10, 1, 3, 3), requires_grad=True)\n self.h = nn.Conv2d(in_channels=2, \n out_channels=150, \n kernel_size=(3, 3), \n stride=1, padding=1) \n self.r = nn.Conv2d(in_channels=150, \n out_channels=1, \n kernel_size=(1, 1), \n stride=1, padding=1)\n self.q = nn.Conv2d(in_channels=1, \n out_channels=10, \n kernel_size=(3, 3), \n stride=1, padding=1) \n\n def forward(self, x):\n h = self.h(x)\n r = self.r(h)\n q = self.q(r)\n v = torch.max(q, dim=1)[0]\n for i in range(n):\n rv = torch.cat([r, v], 1)\n Conv2d(rv, weights=[self.q.weight, self.w2]) # This is obviously not valid torch code\n<ECODE> In Theano it could be accomplished as follows: <SCODE># Helper function\ndef conv2D_keep_shape(x, w, subsample=(1, 1)):\n # crop output to same size as input\n fs = T.shape(w)[2] - 1 # this is the filter size minus 1\n ims = T.shape(x)[2] # this is the image size\n return theano.sandbox.cuda.dnn.dnn_conv(img=x,\n kerns=w,\n border_mode='full',\n subsample=subsample,\n )[:, :, fs/2:ims+fs/2, fs/2:ims+fs/2]\n\n# Weights\nself.w = theano.shared((np.random.randn(1, 150, 1, 1)).astype(theano.config.floatX))\nself.w1 = theano.shared((np.random.randn(10, 1, 3, 3)).astype(theano.config.floatX))\nself.w2 = theano.shared((np.random.randn(10, 1, 3, 3)).astype(theano.config.floatX))\n\n# Model\nself.r = conv2D_keep_shape(input, self.w)\n\nself.q = conv2D_keep_shape(self.r, self.w1)\n\nself.v = T.max(self.q, axis=1, keepdims=True)\n\nfor i in range(n):\n self.q = conv2D_keep_shape(T.concatenate([self.r, self.v], axis=1), T.concatenate([self.w1, self.w2], axis=1))\n<ECODE> Is this possible to achieve in pytorch? Please also let me know if you need more explination in case the above is not clear.",
"isAccepted": false,
"likes": 1,
"poster": "kentsommer"
},
{
"contents": "Just thought of a possible workaround although its fairly hacky and I’m hoping there is a cleaner way to do it. Not super jazzed about needing to modify the pytorch source unless its really necessary. So hopefully someone with more than my admittedly short (one day) experience with pytorch will have a better solution.",
"isAccepted": false,
"likes": null,
"poster": "kentsommer"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "JerryLin"
},
{
"contents": "So would this then be the correct way to do this? (is dropping a functional component into the forward() function valid?) <SCODE>\n def forward(self, x):\n h = self.h(x)\n r = self.r(h)\n q = self.q(r)\n v = torch.max(q, dim=1)[0]\n for i in range(n):\n q = F.conv2d(torch.cat([r, v], 1), \n torch.cat([self.q.weight, self.w_fb], 1), \n padding=1)\n v = torch.max(q, dim=1)[0]\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "kentsommer"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "Hi, Would that be enough to share gradWeights and gradBias? Thanks.",
"isAccepted": false,
"likes": 1,
"poster": "Prasanna1991"
},
{
"contents": "<SCODE>for i in range(n):\nq=q(torch.cat([r,v]))\n\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "Baichuan"
}
] | false |
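To make the pattern discussed in this thread self-contained, here is a hedged sketch of sharing a conv layer's weight plus an extra parameter through the functional interface (the module name, sizes and step count are made up for illustration):
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedConvLoop(nn.Module):
    def __init__(self, n_steps=3):
        super(SharedConvLoop, self).__init__()
        self.n_steps = n_steps
        self.q = nn.Conv2d(1, 10, kernel_size=3, padding=1)
        # extra kernels, registered as a Parameter so they are trained and shared
        self.w2 = nn.Parameter(torch.randn(10, 1, 3, 3) * 0.01)

    def forward(self, r, v):
        # r, v: (N, 1, H, W)
        q = None
        for _ in range(self.n_steps):
            rv = torch.cat([r, v], dim=1)                   # (N, 2, H, W)
            w = torch.cat([self.q.weight, self.w2], dim=1)  # (10, 2, 3, 3)
            q = F.conv2d(rv, w, padding=1)                  # same weights every step
            v = q.max(dim=1, keepdim=True)[0]
        return q


net = SharedConvLoop()
out = net(torch.randn(4, 1, 8, 8), torch.randn(4, 1, 8, 8))
print(out.shape)  # torch.Size([4, 10, 8, 8])
<ECODE>
Because the loop reuses the same weight tensors, gradients from every iteration accumulate into self.q.weight and self.w2 automatically.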
4D Tensor indexing with `gather()` | null | [
{
"contents": "Currently I would like to perform this numpy equivalent command <SCODE>a = np.random.randn(5, 4, 3, 3)\na[range(5), :, [0, 1, 1, 2, 0], [1, 2, 0, 1, 0]]\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "zuoxingdong"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "fmassa"
}
] | false |
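For reference, recent PyTorch versions support NumPy-style advanced indexing directly, so this particular pattern no longer needs gather(); a minimal sketch:
<SCODE>
import torch

a = torch.randn(5, 4, 3, 3)
rows = torch.tensor([0, 1, 1, 2, 0])
cols = torch.tensor([1, 2, 0, 1, 0])

# same as a[range(5), :, [0, 1, 1, 2, 0], [1, 2, 0, 1, 0]] in NumPy
out = a[torch.arange(5), :, rows, cols]
print(out.shape)  # torch.Size([5, 4])
<ECODE>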
Generic question about batch sizes | null | [
{
"contents": "I was playing around with the MNIST example, and noticed the doing TOO big batches (10,000 images per batch) seem to be hurting accuracy scores. This got me thinking : What are the general rules of thumb you guys have found when it comes to batch sizes? Also, what is your favorite optimizer and activation function? Thanks!",
"isAccepted": false,
"likes": null,
"poster": "FuriouslyCurious"
},
{
"contents": "The batch size is usually set between 64 and 256.",
"isAccepted": false,
"likes": 6,
"poster": "vabh"
},
{
"contents": "Thank you for the papers! I was trying to minimize noise through massive batches. The papers you provided are a great explanation of why massive batches may not work. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "FuriouslyCurious"
}
] | false |
LogLoss on Kaggle | null | [
{
"contents": "<SCODE>LogLoss= −1/n∑i=1 to n [yi log(ŷ i) + (1−yi) log(1−ŷ i)],\n\nwhere:\n n : is the number of patients in the test set\n ŷi : is the predicted probability of the image belonging to a patient with cancer\n yi : is 1 if the diagnosis is cancer, 0 otherwise\n<ECODE> <SCODE>torch.nn MLSM Loss:\nloss(x, y) = - sum_i (y[i] log( exp(x[i]) / (1 + exp(x[i])))\n + (1-y[i]) log(1/(1+exp(x[i])))) / x:nElement()\n<ECODE> Thanks!",
"isAccepted": false,
"likes": 2,
"poster": "FuriouslyCurious"
},
{
"contents": "Hi Torchies, Bumping an old question… if someone has a paper / presentation comparing different loss functions and their effects, it will really help me out. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "FuriouslyCurious"
}
] | false |
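The Kaggle formula quoted above is just the mean-reduced binary cross-entropy computed on probabilities, which is easy to check numerically; a small sketch with made-up numbers:
<SCODE>
import torch
import torch.nn.functional as F

y = torch.tensor([1., 0., 1., 1., 0.])        # ground-truth labels
p = torch.tensor([0.9, 0.2, 0.6, 0.8, 0.1])   # predicted probabilities

# Kaggle-style log loss, written out by hand
logloss = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()

# PyTorch's BCE on probabilities gives the same number
bce = F.binary_cross_entropy(p, y)

print(logloss.item(), bce.item())  # identical up to float precision
<ECODE>
Note that the MultiLabelSoftMargin formula above additionally applies a sigmoid to raw scores, so it corresponds to BCE-with-logits rather than BCE on probabilities.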
Finished cifar10 and have some questions about mechanics and data loading | null | [
{
"contents": "Hi, As always, as part of my first post I thank the developers for this amazing library that helps a lot of us in our deep learning escapades. I’ve finished running through the first tutorial involving the CIFAR10 dataset and have some questions. In this code block, Could some explain in detail on what is going on here. These might be more of python related questions than a pytorch question but I think its crucial to understand what is happening here. I understand whats happening in an abstract level but not on the code level. In particular, In the prediction code block, This code (net(images)) is similar to the training stage, so I’m not sure how we are “testing” because we don’t have testing mode. For example, in Keras for training we use model.fit and testing we use model.evaluate, and I’m not seeing a similar distinction here. I apologize for a whole lot of questions, most of them born out of ignorance and I’m sure I’ll have more as I start using pytorch for my problems. If I need to split them up into separate posts, please let me know and I’ll edit the post accordingly. Thanks and I appreciate everyone’s help!",
"isAccepted": false,
"likes": 3,
"poster": "shaun"
},
{
"contents": "No, these are great questions. They’re all surrounded by double underscores (so they’re called “dunder” methods): \n__init__ is one of them, which defines the constructor; \n__str__ is another one – what you implement there defines what Python will do if you call str(obj). The __call__ dunder method defines what Python will do if you call\nan instance of the class as if it were a function. Others questions:",
"isAccepted": false,
"likes": 3,
"poster": "jekbradbury"
},
{
"contents": "@jekbradbury Thank you so much for your replies, I appreciate it. It cleared up a lot of stuff. I still have some questions maybe other people can pop in to the conversation. The following code is how we load the CIFAR10 dataset. For test, we just set train=False My initial intuition was just to set train_loader = train_loader[:small_number] but I got an error: Then I thought I could mess with the train_set directly but I got another error: Both these objects have a len function: So I’m not sure how to get a validation set out of this. So where is this flag being set and how do we know its not training again? I can think of two reasons on this works, but I’m not sure which one: Training block: Test block: During testing (aka prediction) we don’t compute the loss, run the backward, and run the optimization.step() which would mean we are just getting the class labels. So by omitting those steps we do the prediction? This makes sense to me after thinking about it, but it would be helpful if I could get confirmation that this is in fact what is happening. Thanks.",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "Train/test mode is something like this: <SCODE>net.train()\n#train loop/function\nfor (images, labels) in train_loader:\n # train code\n\n\nnet.eval()\n#test loop or function\nfor (images, labels) in test_loader\n# test code eg. outputs = net(images)\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "vabh"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "The network in the example you linked does not have these layers which is why I suspect they did not call the eval function. Calling the train and eval functions won’t affect the model output. For models which have dropout/batchnorm layers, its quite imperative that you call the eval function before training.",
"isAccepted": false,
"likes": null,
"poster": "vabh"
},
{
"contents": "If the dataset is reasonably simple, you can split the dataset like so: Best regards Thomas",
"isAccepted": false,
"likes": 1,
"poster": "tom"
},
{
"contents": "@tom",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "the idea of using a validation set is that whatever you plan do to the test dataset would work for the validation set as well (really, you would use the test set’s DataLoader for the val_dataset). As such, my suggestion would be to feed it to a dataloader that works similar to the test one (for the MNIST example, in fact, you could make test_loader a parameter to the test function and feed the val_loader. That would also do the model.eval() call mentioned earlier). Personally, I would prefer to keep the workflow with Dataset->DataLoader->Validation as that is scalable and looks like an efficient use of my time, but you should certainly just do whatever works for you. Hope this helps, even if it’s just my very limited take on something that ultimately boils down to style preferences and I cannot vouch for my expertise in that. Best regards Thomas",
"isAccepted": false,
"likes": 1,
"poster": "tom"
},
{
"contents": "this works well, i believe something like this should be part of pytorch, at least as an example",
"isAccepted": false,
"likes": null,
"poster": "deepcode"
},
{
"contents": "Might be a bit late to the party, but to create a train/val/test split (of 45k/5k/10k) in CIFAR10, this should work : <SCODE>n_train = 45000\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)\n\n# create a validation set\nvalset = deepcopy(trainset)\ntrainset.train_data = trainset.train_data[:n_train]\ntrainset.train_labels = trainset.train_labels[:n_train]\n\nvalset.train_data = valset.train_data[n_train:]\nvalset.train_labels = valset.train_labels[n_train:]\n\n# create a test set\ntestset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)\n\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)\nvalloader = torch.utils.data.DataLoader(valset, batch_size=128, shuffle=True, num_workers=2)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)<ECODE>",
"isAccepted": false,
"likes": 2,
"poster": "superhans"
}
] | false |
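On newer PyTorch versions the validation split discussed in this thread can be done without touching the dataset internals, e.g. with torch.utils.data.random_split; a sketch (transforms kept minimal):
<SCODE>
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, random_split

to_tensor = transforms.ToTensor()
full_train = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=to_tensor)
test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=to_tensor)

# 45k training / 5k validation samples
train_set, val_set = random_split(full_train, [45000, 5000])

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)
val_loader = DataLoader(val_set, batch_size=128, shuffle=False, num_workers=2)
test_loader = DataLoader(test_set, batch_size=100, shuffle=False, num_workers=2)
<ECODE>
The validation loader is then used exactly like the test loader (forward passes under net.eval(), no backward or optimizer step).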
New tutorials page | Site Feedback | [
{
"contents": "Hi, We’ve replaced the old notebook based tutorials to better looking sphinx-gallery based tutorials. You can still download notebooks for individual tutorials. Do you have any feedback?",
"isAccepted": false,
"likes": 5,
"poster": "chsasank"
},
{
"contents": "Nice work ! This work does really matter for the community, thanks !",
"isAccepted": false,
"likes": null,
"poster": "trypag"
},
{
"contents": "Thanks for the reply. Since I broke down the long and monolithic notebook tutorials into pieces and sections, I was of the impression that they are easier to read. Can you please illustrate your point with an example/screenshot?",
"isAccepted": false,
"likes": null,
"poster": "chsasank"
},
{
"contents": "As I said, what I liked in ipython, was that each step was contained in a single cell, then moving to another one meant changing of task. The whole page doesn’t seem as fluid to read as before. I don’t really know how to solve this, maybe it’s okay not having cells anymore and we can move to another design, but I don’t feel really comfortable with it right now. To me it requires thinking of how each step in the tutorial is designed, so that we can follow much easier. A first step may be to join a code panel and its associated result panel, without space between.",
"isAccepted": false,
"likes": null,
"poster": "trypag"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "chsasank"
},
{
"contents": "@chsasank I really like the new tutorials page. Its easily accessible compared to the old one and is part of the documentation as well. Will you be adding more tutorials that others have written and available on github? Also, it’ll be nice to have a tutorial on using torchtext.",
"isAccepted": false,
"likes": 2,
"poster": "shaun"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "chsasank"
},
{
"contents": "@chsasank I’ll try to do that. Meanwhile here are my stars on Github, I think some of them are already on the tutorial: Awesome PyTorch 13 The Incredible Pytorch 9 Pytorch Tutorial 10",
"isAccepted": false,
"likes": 1,
"poster": "shaun"
},
{
"contents": "Hi, It is not in the “tutorial” format, but I thought I would share it with the community. If there is another way to share personal projects that might be of help to others please let me know. Thanks!",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "thanks for sharing shaun!",
"isAccepted": false,
"likes": null,
"poster": "smth"
}
] | false |
How can I do the operation the same as `np.where`? | null | [
{
"contents": "",
"isAccepted": true,
"likes": 8,
"poster": "wangg12"
},
{
"contents": "",
"isAccepted": true,
"likes": 1,
"poster": "DiffEverything"
},
{
"contents": "",
"isAccepted": true,
"likes": 1,
"poster": "jekbradbury"
},
{
"contents": "I actually solved it by a workaround: <SCODE># differentiable equivalent of np.where\n# cond could be a FloatTensor with zeros and ones\ndef where(cond, x_1, x_2):\n return (cond * x_1) + ((1-cond) * x_2)\n<ECODE>",
"isAccepted": true,
"likes": 17,
"poster": "jaromiru"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "yxchng"
},
{
"contents": "",
"isAccepted": true,
"likes": 1,
"poster": "truenicoco"
},
{
"contents": "thanks others for examples of other options for 0.3 and before",
"isAccepted": true,
"likes": 5,
"poster": "Pete_Florence"
},
{
"contents": "<SCODE>## this example prunes any elements in the vector above or below the bounds\nvec = torch.FloatTensor([-3.2, 1000.0, 10.0, 639.0])\nlower_bound = 0.0\nupper_bound = 640.0\nlower_bound_vec = torch.ones_like(vec) * lower_bound\nupper_bound_vec = torch.ones_like(vec) * upper_bound\nzeros_vec = torch.zeros_like(vec)\n\ndef where(cond, x_1, x_2):\n cond = cond.float() \n return (cond * x_1) + ((1-cond) * x_2)\n\nvec = where(vec < lower_bound_vec, zeros_vec, vec)\nvec = where(vec > upper_bound_vec, zeros_vec, vec)\n\nin_bound_indices = torch.nonzero(vec).squeeze(1)\nvec = torch.index_select(vec, 0, in_bound_indices)\n<ECODE>",
"isAccepted": true,
"likes": 2,
"poster": "Pete_Florence"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "jpeg729"
},
{
"contents": "",
"isAccepted": true,
"likes": null,
"poster": "Pete_Florence"
}
] | true |
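Since PyTorch 0.4 there is a built-in torch.where with the same semantics as the three-argument form of np.where, so the manual (cond * x_1) + ((1 - cond) * x_2) trick is only needed on older versions; a minimal sketch:
<SCODE>
import torch

cond = torch.tensor([True, False, True])
x = torch.tensor([1., 2., 3.])
y = torch.tensor([10., 20., 30.])

print(torch.where(cond, x, y))         # tensor([ 1., 20.,  3.])

# indices of the True entries, similar to np.where(cond) / np.nonzero
print(torch.nonzero(cond).squeeze(1))  # tensor([0, 2])
<ECODE>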
GPU high memory usage | null | [
{
"contents": "<SCODE>class seg_GAN(object):\n def __init__(self, batch_size=10, height=512,width=512,channels=3, wd=0.0005,nfilters_d=64, checkpoint_dir=None, path_imgs=None, learning_rate=2e-8,lr_step=30000,lam_fcn=1, lam_adv=1,adversarial=False,nclasses=5):\n\n self.adversarial=adversarial\n self.channels=channels\n self.lam_fcn=lam_fcn\n self.lam_adv=lam_adv\n self.lr_step=lr_step\n self.wd=wd\n self.learning_rate=learning_rate\n self.batch_size=batch_size \n self.height=height\n self.width=width\n self.checkpoint_dir = checkpoint_dir\n self.path_imgs=path_imgs\n self.nfilters_d=nfilters_d\n self.organ_target=1#1 eso 2 heart 3 trach 4 aorta\n self.nclasses=nclasses\n self.netG=UNet(self.nclasses,self.channels)\n self.netG.apply(weights_init)\n\tif self.adversarial:\n\t self.netD=Discriminator(self.nclasses,self.nfilters_d,self.height,self.width)\n self.netD.apply(weights_init)\n\n self.dst = myDataSet(self.path_imgs, is_transform=True)\n self.trainloader = data.DataLoader(self.dst, batch_size=self.batch_size, shuffle=True, num_workers=2)\n\n def train(self,config):\n print 'verion ',torch.__version__\n\n start=0#TODO change this so that it can continue when loading a model\n print(\"Start from:\", start)\n\n label_ones=torch.ones(self.batch_size)\n label_zeros=torch.zeros(self.batch_size)\n y_onehot = torch.FloatTensor(self.batch_size,self.nclasses,self.height, self.width) \n\n #print 'shape y_onehot ',y_onehot.size()\n if self.adversarial:\n self.netD.cuda()\n self.netG.cuda()\n label_ones,label_zeros,y_onehot=label_ones.cuda(),label_zeros.cuda(),y_onehot.cuda()\n \n y_onehot_var= Variable(y_onehot)\n label_ones_var = Variable(label_ones)\n label_zeros_var = Variable(label_zeros)\n if self.adversarial:\n optimizerD = optim.Adam(self.netD.parameters(), lr = self.learning_rate, betas = (0.5, 0.999))\n optimizerG = optim.Adam(self.netG.parameters(), lr = self.learning_rate, betas = (0.5, 0.999))\n\n for it in range(start,config.iterations):#epochs\n for i, (images,GT) in enumerate(self.trainloader): \n \n y_onehot.resize_(GT.size(0),self.nclasses,self.height, self.width)\n y_onehot.zero_()\n label_ones.resize_(GT.size(0))\n label_zeros.resize_(GT.size(0)) \n\n images = Variable(images.cuda()) \n #images = Variable(images)\n #print 'unique ',np.unique(GT.numpy())\n GT=GT.cuda()\n \n #print 'image size ',images.size()\n #print 'GT size ',GT.size()\n #print 'shape y_onehot ',y_onehot.size() \n y_onehot.scatter_(1,GT.view(GT.size(0),1,GT.size(1),GT.size(2)),1)#we need to add singleton dim so thatnum of dims is equal\n \n GT=Variable(GT)#N,H,W\n if self.adversarial:\n\n ##########################\n #Update Discriminator\n ##########################\n #train with real samples\n self.netD.zero_grad()\n #print self.netD\n output=self.netD(y_onehot_var)#this must be in one hot\n errD_real =F.binary_cross_entropy(output,label_ones_var)#loss_D\n errD_real.backward()#update grads of netD \n\n # train with fake\n fake = self.netG(images)#this is a prob map which we want to be similar to y_onehot\n #print 'fake sz',fake.size()\n output = self.netD(fake.detach())#only for speed, so grads of netg are not computed\n errD_fake = F.binary_cross_entropy(output, label_zeros_var)\n \n errD_fake.backward()\n\n optimizerD.step()#update the parameters of netD\n\n ############################\n # Update G network\n ###########################\n self.netG.zero_grad()\n if self.adversarial:\n output_D=self.netD(fake)\n output_G, GT,label_ones,output_D\n errG = self.loss_G(fake,GT, label_ones_var,output_D)#here we 
should use ones with the fakes\n else:\n fake = self.netG(images)\n errG = self.loss_G(fake,GT)\n \n errG.backward()#backprop errors\n optimizerG.step()#optimize only netG params\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "rogetrullo"
},
{
"contents": "Does it have anything to do with the fact that PyTorch doesn’t use static buffers, and re-allocates buffers in every pass? Like to get an answer from someone who knows this better.",
"isAccepted": false,
"likes": null,
"poster": "hyqneuron"
},
{
"contents": "Seeing similar issues here with RNNs. Any thoughts?",
"isAccepted": false,
"likes": null,
"poster": "ryanleary"
},
{
"contents": "images = Variable(images.cuda()) by images = Variable(images.cuda(), requires_grad=False)",
"isAccepted": false,
"likes": 1,
"poster": "dpernes"
}
] | false |
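Besides the requires_grad change suggested above, on newer PyTorch versions the largest memory saving for validation/test passes comes from running them under torch.no_grad(), which avoids building the graph and storing intermediate activations; a hedged sketch of such an evaluation loop:
<SCODE>
import torch


def evaluate(net, loader, device="cuda"):
    net.eval()                    # switch dropout/batchnorm to inference mode
    with torch.no_grad():         # no autograd graph, no saved activations
        for images, labels in loader:
            outputs = net(images.to(device))
            # ... compute metrics on outputs ...
    net.train()
<ECODE>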
How the following two classes interacts? | null | [
{
"contents": "I was following the example of SNLI classifier from pytorch official examples and found the following two classes. <SCODE>class Bottle(nn.Module):\n\n def forward(self, input):\n if len(input.size()) <= 2:\n return super(Bottle, self).forward(input)\n size = input.size()[:2]\n out = super(Bottle, self).forward(input.view(size[0]*size[1], -1))\n return out.view(*size, -1)\n\n\nclass Linear(Bottle, nn.Linear):\n pass\n<ECODE> Can anyone share your insight?",
"isAccepted": false,
"likes": 1,
"poster": "wasiahmad"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "pranav"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "DiffEverything"
},
{
"contents": "",
"isAccepted": false,
"likes": 1,
"poster": "pranav"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "jekbradbury"
},
{
"contents": "I believe its best if we do not have such concepts at all - just simple Python and letting PyTorch/Model take the center stage",
"isAccepted": false,
"likes": 1,
"poster": "pranav"
}
] | false |
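The trick in this thread relies purely on Python's method resolution order: Linear lists Bottle first, so Bottle.forward runs, and its super() call dispatches to nn.Linear.forward. A stripped-down sketch of the same pattern without any PyTorch:
<SCODE>
class Base(object):
    def forward(self, x):
        return "Base.forward(%s)" % x


class Bottle(object):
    def forward(self, x):
        # reshape/bookkeeping would go here, then defer to the next class in the MRO
        return "Bottle wraps " + super(Bottle, self).forward(x)


class Linear(Bottle, Base):
    pass


print(Linear.__mro__)         # Linear -> Bottle -> Base -> object
print(Linear().forward(42))   # Bottle wraps Base.forward(42)
<ECODE>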
Error while using NLLLoss2D | null | [
{
"contents": "<SCODE>class convNet(nn.Module):\n #constructor\n def __init__(self):\n super(convNet, self).__init__()\n #defining layers in convnet\n #input size=1*657*1625\n self.conv1 = nn.Conv2d(1,16, kernel_size=3,stride=1,padding=1)\n self.conv2 = nn.Conv2d(16,32, kernel_size=3,stride=1,padding=1)\n self.conv3 = nn.Conv2d(32,64, kernel_size=3,stride=1,padding=1)\n self.conv4 = nn.Conv2d(64,64, kernel_size=3,stride=1,padding=1)\n self.conv5 = nn.Conv2d(32,16, kernel_size=3,stride=1,padding=1) \n \n #Parallel rectangle and square convolution\n self.Pconv1=nn.Conv2d(64,32, kernel_size=(3,3),stride=1,padding=(1,1))\n self.Pconv2=nn.Conv2d(64,32, kernel_size=(3,7),stride=1,padding=(1,3))\n self.Pconv3=nn.Conv2d(64,32, kernel_size=(7,3),stride=1,padding=(3,1))\n \n #auxilary convolution\n \n self.conv6 = nn.Conv2d(16,8, kernel_size=3,stride=1,padding=1)\n self.conv7 = nn.Conv2d(8,1, kernel_size=3,stride=1,padding=1)\n \n def forward(self, x):\n x = nnFunctions.leaky_relu(self.conv1(x))\n x = nnFunctions.leaky_relu(self.conv2(x))\n x = nnFunctions.leaky_relu(self.conv3(x))\n x = nnFunctions.leaky_relu(self.conv4(x))\n x=nnFunctions.leaky_relu(self.Pconv1(x))+nnFunctions.leaky_relu(self.Pconv2(x))+nnFunctions.leaky_relu(self.Pconv3(x))\n x=nnFunctions.leaky_relu(self.conv5(x))\n x=nnFunctions.leaky_relu(self.conv6(x))\n \n x=nnFunctions.leaky_relu(self.conv7(x))\n return x\n<ECODE> And then I train the network using the following training function: <SCODE>def train(train_loader,net,criterion,epochs,total_samples,learning_rate):\n prev_loss=0\n optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)\n \n for epoch in range(int(epochs)): # loop over the dataset multiple times\n running_loss = 0.0\n for i,data in enumerate(train_loader):\n inputs,labels=data\n # wrap them in Variable\n inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()\n \n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n \n loss = criterion(outputs, labels)\n loss.backward() \n optimizer.step()\n if i==0:\n print loss\n # print statistics\n running_loss += loss.data[0]\n print(running_loss/total_samples)\n print('Finished Training')\n return net\n<ECODE> net=train(train_loader,net,criterion,1,410,0.01) But I get the following error: <SCODE>TypeError Traceback (most recent call last)\n<ipython-input-25-e228ad25a2e4> in <module>()\n----> 1 net=train(train_loader,net,criterion,1,410,0.01)\n\n<ipython-input-23-15ac57a260e6> in train(train_loader, net, criterion, epochs, total_samples, learning_rate)\n 16 outputs = net(inputs)\n 17 \n---> 18 loss = criterion(outputs, labels)\n 19 loss.backward()\n 20 optimizer.step()\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n--> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/modules/loss.pyc in forward(self, input, target)\n 21 _assert_no_grad(target)\n 22 backend_fn = getattr(self._backend, type(self).__name__)\n---> 23 return backend_fn(self.size_average)(input, target)\n 24 \n 25 \n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.pyc in forward(self, input, target)\n 39 output = input.new(1)\n 40 getattr(self._backend, update_output.name)(self._backend.library_state, input, target,\n---> 
41 output, *self.additional_args)\n 42 return output\n 43 \n\nTypeError: CudaSpatialClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, bool, NoneType, torch.cuda.FloatTensor), but expected (int state, torch.cuda.FloatTensor input, torch.cuda.LongTensor target, torch.cuda.FloatTensor output, bool sizeAverage, [torch.cuda.FloatTensor weights or None], torch.cuda.FloatTensor total_weight)\n<ECODE> Please someone help.",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "did you try figuring out the error message, it seems somewhat informative. You are giving FloatTensor targets when it expects LongTensor targets",
"isAccepted": false,
"likes": null,
"poster": "smth"
},
{
"contents": "I corrected the above error but now I get another one saying : <SCODE>RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:17\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "My model does work with cuda and cpu which should also fix your problem however it does not converge (need to find out why but this is not necessarily relevant for you).",
"isAccepted": false,
"likes": 1,
"poster": "bodokaiser"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Saeed_Izadi"
},
{
"contents": "Thanks",
"isAccepted": false,
"likes": null,
"poster": "achaiah"
},
{
"contents": "Maybe it is because the type of label is wrong, I’ve met with this problems 5min ago.",
"isAccepted": false,
"likes": null,
"poster": "Garfield-Finch"
}
] | false |
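For reference, the shape contract behind both errors in this thread: the prediction is an (N, C, H, W) score map and the target is an (N, H, W) LongTensor of class indices (no one-hot encoding, no channel dimension); a minimal sketch using cross_entropy, which combines log_softmax and the NLL criterion:
<SCODE>
import torch
import torch.nn.functional as F

N, C, H, W = 2, 5, 8, 8
logits = torch.randn(N, C, H, W, requires_grad=True)        # raw network output
target = torch.randint(0, C, (N, H, W), dtype=torch.long)   # class index per pixel

loss = F.cross_entropy(logits, target)
loss.backward()
print(loss.item())
<ECODE>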
Tensor.sub works, Variable.sub fails with “inconsistent tensor size” | null | [
{
"contents": "<SCODE>import torch\nfrom torch.autograd import Variable\nx = torch.rand(1, 2, 3, 4)\n# This succeeds.\nx.sub(x.max()) \nv = Variable(x, volatile=True)\n# This fails with \"RuntimeError: inconsistent tensor size at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:827\"\nv.sub(v.max())<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "ndronen"
},
{
"contents": "the output of v.max() is Variable with size 1, while x.max() is a scalar",
"isAccepted": false,
"likes": null,
"poster": "xwgeng"
},
{
"contents": "That begs the question whether that is the correct behavior. Is it?",
"isAccepted": false,
"likes": null,
"poster": "ndronen"
},
{
"contents": "",
"isAccepted": false,
"likes": 2,
"poster": "smth"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ndronen"
}
] | false |
NaN values obtained in loss.data[0] | null | [
{
"contents": "<SCODE>class convNet(nn.Module):\n #constructor\n def __init__(self):\n super(convNet, self).__init__()\n #defining layers in convnet\n #input size=1*657*1625\n self.conv1 = nn.Conv2d(1,16, kernel_size=3,stride=1,padding=1)\n self.conv2 = nn.Conv2d(16,32, kernel_size=3,stride=1,padding=1)\n self.conv3 = nn.Conv2d(32,64, kernel_size=3,stride=1,padding=1)\n \n #Parallel rectangle and square convolution\n self.Pconv1=nn.Conv2d(64,32, kernel_size=(3,3),stride=1,padding=(1,1))\n self.Pconv2=nn.Conv2d(64,32, kernel_size=(3,7),stride=1,padding=(1,3))\n self.Pconv3=nn.Conv2d(64,32, kernel_size=(7,3),stride=1,padding=(3,1))\n \n #auxilary convolution\n \n self.conv6 = nn.Conv2d(32,8, kernel_size=3,stride=1,padding=1)\n self.conv7 = nn.Conv2d(8,1, kernel_size=3,stride=1,padding=1)\n \n def forward(self, x):\n x = nnFunctions.leaky_relu(self.conv1(x))\n x = nnFunctions.leaky_relu(self.conv2(x))\n x = nnFunctions.leaky_relu(self.conv3(x))\n x=nnFunctions.leaky_relu(self.Pconv1(x))+nnFunctions.leaky_relu(self.Pconv2(x))+nnFunctions.leaky_relu(self.Pconv3(x))\n x=nnFunctions.leaky_relu(self.conv6(x))\n x=nnFunctions.leaky_relu(self.conv7(x))\n return x\n<ECODE> I use loss functions: <SCODE>def train(train_loader,net,criterion,epochs,total_samples,learning_rate):\n prev_loss=0\n optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)\n \n for epoch in range(int(epochs)): # loop over the dataset multiple times\n running_loss = 0.0\n for i,data in enumerate(train_loader):\n inputs,labels=data\n # wrap them in Variable\n inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()\n \n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n \n loss = criterion(outputs, labels)\n loss.backward() \n optimizer.step()\n \n # print statistics\n running_loss += loss.data[0]\n **print (i,running_loss)**\n print('Finished Training')\n return net\n<ECODE> Kindly help",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "<SCODE>def forward(self, x):\n x = nnFunctions.leaky_relu(self.conv1(x))\n print x.data\n x = nnFunctions.leaky_relu(self.conv2(x))\n print x.data\n x = nnFunctions.leaky_relu(self.conv3(x))\n print x.data \n x=nnFunctions.leaky_relu(self.Pconv1(x))+nnFunctions.leaky_relu(self.Pconv2(x))+nnFunctions.leaky_relu(self.Pconv3(x))\n\t\tprint x.data\n x=nnFunctions.leaky_relu(self.conv6(x))\n x=nnFunctions.leaky_relu(self.conv7(x))\n return x\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "chenyuntc"
},
{
"contents": "Even if my learning rate is high there should be increase in loss right but the code gives nan. I think the output should be a number which is greater than previous loss but not nan.",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "Is your loss printing out NaN from the beginning or are you getting numbers which constantly increase and become NaN eventually?",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "sarthak1996"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "live-wire"
}
] | false |
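When the loss blows up to NaN like this, lowering the learning rate is the first fix; clipping the gradient norm before the optimizer step is another common safeguard on recent versions. A hedged sketch with a toy stand-in model:
<SCODE>
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # toy stand-in for the real model
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
criterion = nn.MSELoss()

x = torch.randn(4, 1, 16, 16)
target = torch.randn(4, 8, 16, 16)

optimizer.zero_grad()
loss = criterion(net(x), target)
loss.backward()
# keep the gradient norm bounded so one bad batch cannot blow up the weights
torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
optimizer.step()
<ECODE>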
Any easy workaround for using weighted smooth_l1 loss for different output units? | null | [
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "ypxie"
},
{
"contents": "there isn’t an easy work-around. you can call the loss in a for-loop over the mini-batch.",
"isAccepted": false,
"likes": 1,
"poster": "smth"
}
] | false |
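On recent PyTorch versions this no longer needs a Python loop: request elementwise losses with reduction='none', multiply by a weight tensor of the same shape, and reduce; a sketch:
<SCODE>
import torch
import torch.nn.functional as F

pred = torch.randn(4, 6, requires_grad=True)
target = torch.randn(4, 6)
weights = torch.rand(4, 6)   # one weight per output unit

per_elem = F.smooth_l1_loss(pred, target, reduction='none')  # same shape as pred
loss = (per_elem * weights).sum() / weights.sum()
loss.backward()
<ECODE>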
Data loader with a custom dataset | null | [
{
"contents": "Thanks",
"isAccepted": false,
"likes": null,
"poster": "VladislavPrh"
},
{
"contents": "you should look at our examples, which give you better hints on how to achieve this:",
"isAccepted": false,
"likes": 1,
"poster": "smth"
}
] | false |
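The usual pattern behind those examples: subclass torch.utils.data.Dataset with __len__ and __getitem__, then hand it to a DataLoader; a minimal sketch with made-up tensors standing in for your files:
<SCODE>
import torch
from torch.utils.data import Dataset, DataLoader


class MyDataset(Dataset):
    def __init__(self, inputs, targets):
        assert len(inputs) == len(targets)
        self.inputs = inputs
        self.targets = targets

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        # load/transform a single sample here (e.g. read an image from disk)
        return self.inputs[idx], self.targets[idx]


ds = MyDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))
loader = DataLoader(ds, batch_size=16, shuffle=True, num_workers=0)

for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([16, 3, 32, 32]) torch.Size([16])
    break
<ECODE>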
Simple neural network not converging | null | [
{
"contents": "<SCODE>#With autograd\nimport torch\nfrom torch.autograd import Variable\n\ndtype = torch.cuda.FloatTensor\n\nN, D_in, H, D_out = 64, 1000, 100, 10\n\nx = Variable(torch.randn(N, D_in).type(dtype), requires_grad = False)\ny = Variable(torch.randn(N, D_out).type(dtype), requires_grad = False)\n\nw1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad = True)\nw2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad = True)\n\nlearning_rate = 1e-6\nfor t in range(500):\n y_pred = x.mm(w1).clamp(min = 0).mm(w2)\n \n loss = (y_pred - y).pow(2).sum()\n print(t, loss.data[0])\n \n #w1.grad.data.zero_()\n #w2.grad.data.zero_()\n \n loss.backward()\n #print w1.grad.data\n #print w2.grad.data\n w1.data -= learning_rate * w1.grad.data\n w2.data -= learning_rate * w2.grad.data\n<ECODE> <SCODE>(0, 31723518.0)\n(1, 28070452.0)\n(2, 8525556.0)\n(3, 14738816.0)\n(4, 9347755.0)\n(5, 12841774.0)\n(6, 18114290.0)\n(7, 6447365.5)\n(8, 11224685.0)\n(9, 9882719.0)\n(10, 2951912.0)\n(11, 2978006.25)\n(12, 6616687.5)\n(13, 7743705.0)\n(14, 5883046.5)\n(15, 3643038.25)\n(16, 2570257.25)\n(17, 2455251.0)\n(18, 2659530.75)\n(19, 2724341.5)\n(20, 2513530.25)\n(21, 2057666.625)\n(22, 1586186.375)\n(23, 1254101.625)\n(24, 1110446.375)\n(25, 1110734.0)\n(26, 1145980.0)\n(27, 1071132.875)\n(28, 910926.4375)\n(29, 782463.5)\n(30, 719357.125)\n(31, 717793.9375)\n(32, 761821.125)\n(33, 756986.375)\n(34, 682688.4375)\n(35, 646783.5625)\n(36, 679672.0)\n(37, 676811.0)\n(38, 600790.3125)\n(39, 631020.375)\n(40, 692508.6875)\n(41, 696700.5625)\n(42, 615305.625)\n(43, 504780.4375)\n(44, 505154.0)\n(45, 507697.0625)\n(46, 498239.1875)\n(47, 478827.5)\n(48, 531659.3125)\n(49, 472687.5)\n(50, 433654.9375)\n(51, 504356.59375)\n(52, 475822.34375)\n(53, 465258.40625)\n(54, 490428.53125)\n(55, 542419.6875)\n(56, 480332.28125)\n(57, 456323.03125)\n(58, 548866.5)\n(59, 460200.1875)\n(60, 582967.375)\n(61, 467767.125)\n(62, 399487.1875)\n(63, 525414.75)\n(64, 563015.5)\n(65, 630127.125)\n(66, 339907.625)\n(67, 485001.0625)\n(68, 541414.6875)\n(69, 637931.8125)\n(70, 424327.5)\n(71, 444804.25)\n(72, 542814.6875)\n(73, 624015.6875)\n(74, 405953.71875)\n(75, 523452.90625)\n(76, 604742.4375)\n(77, 624313.0625)\n(78, 665899.8125)\n(79, 796917.625)\n(80, 1059727.875)\n(81, 1661096.375)\n(82, 3876985.5)\n(83, 5157832.0)\n(84, 2041864.25)\n(85, 5117962.0)\n(86, 5582782.0)\n(87, 9489012.0)\n(88, 28304358.0)\n(89, 92396984.0)\n(90, 135757312.0)\n(91, 30141958.0)\n(92, 36246224.0)\n(93, 63904096.0)\n(94, 27171200.0)\n(95, 22396498.0)\n(96, 18266130.0)\n(97, 25967810.0)\n(98, 23575290.0)\n(99, 8453866.0)\n(100, 13056855.0)\n(101, 7837615.5)\n(102, 10242168.0)\n(103, 8700571.0)\n(104, 178546768.0)\n(105, 311015104.0)\n(106, 264007536.0)\n(107, 31766490.0)\n(108, 79658920.0)\n(109, 19210790.0)\n(110, 20177744.0)\n(111, 24349004.0)\n(112, 158815472.0)\n(113, 51590388.0)\n(114, 42294844.0)\n(115, 20198332.0)\n(116, 26488356.0)\n(117, 14971826.0)\n(118, 296145664.0)\n(119, 11408661504.0)\n(120, 472693047296.0)\n(121, 1.5815737104924672e+16)\n(122, 2.7206068612442637e+30)\n(123, inf)\n(124, nan)\n(125, nan)\n(126, nan)\n(127, nan)\n(128, nan)\n(129, nan)\n(130, nan)\n(131, nan)\n(132, nan)\n(133, nan)\n(134, nan)\n(135, nan)\n(136, nan)\n(137, nan)\n(138, nan)\n(139, nan)\n(140, nan)\n(141, nan)\n(142, nan)\n(143, nan)\n(144, nan)\n(145, nan)\n(146, nan)\n(147, nan)\n(148, nan)\n(149, nan)\n(150, nan)\n(151, nan)\n(152, nan)\n(153, nan)\n(154, nan)\n(155, nan)\n(156, nan)\n(157, nan)\n(158, nan)\n(159, nan)\n(160, nan)\n(161, 
nan)\n(162, nan)\n(163, nan)\n(164, nan)\n(165, nan)\n(166, nan)\n(167, nan)\n(168, nan)\n(169, nan)\n(170, nan)\n(171, nan)\n(172, nan)\n(173, nan)\n(174, nan)\n(175, nan)\n(176, nan)\n(177, nan)\n(178, nan)\n(179, nan)\n(180, nan)\n(181, nan)\n(182, nan)\n(183, nan)\n(184, nan)\n(185, nan)\n(186, nan)\n(187, nan)\n(188, nan)\n(189, nan)\n(190, nan)\n(191, nan)\n(192, nan)\n(193, nan)\n(194, nan)\n(195, nan)\n(196, nan)\n(197, nan)\n(198, nan)\n(199, nan)\n(200, nan)\n(201, nan)\n(202, nan)\n(203, nan)\n(204, nan)\n(205, nan)\n(206, nan)\n(207, nan)\n(208, nan)\n(209, nan)\n(210, nan)\n(211, nan)\n(212, nan)\n(213, nan)\n(214, nan)\n(215, nan)\n(216, nan)\n(217, nan)\n(218, nan)\n(219, nan)\n(220, nan)\n(221, nan)\n(222, nan)\n(223, nan)\n(224, nan)\n(225, nan)\n(226, nan)\n(227, nan)\n(228, nan)\n(229, nan)\n(230, nan)\n(231, nan)\n(232, nan)\n(233, nan)\n(234, nan)\n(235, nan)\n(236, nan)\n(237, nan)\n(238, nan)\n(239, nan)\n(240, nan)\n(241, nan)\n(242, nan)\n(243, nan)\n(244, nan)\n(245, nan)\n(246, nan)\n(247, nan)\n(248, nan)\n(249, nan)\n(250, nan)\n(251, nan)\n(252, nan)\n(253, nan)\n(254, nan)\n(255, nan)\n(256, nan)\n(257, nan)\n(258, nan)\n(259, nan)\n(260, nan)\n(261, nan)\n(262, nan)\n(263, nan)\n(264, nan)\n(265, nan)\n(266, nan)\n(267, nan)\n(268, nan)\n(269, nan)\n(270, nan)\n(271, nan)\n(272, nan)\n(273, nan)\n(274, nan)\n(275, nan)\n(276, nan)\n(277, nan)\n(278, nan)\n(279, nan)\n(280, nan)\n(281, nan)\n(282, nan)\n(283, nan)\n(284, nan)\n(285, nan)\n(286, nan)\n(287, nan)\n(288, nan)\n(289, nan)\n(290, nan)\n(291, nan)\n(292, nan)\n(293, nan)\n(294, nan)\n(295, nan)\n(296, nan)\n(297, nan)\n(298, nan)\n(299, nan)\n(300, nan)\n(301, nan)\n(302, nan)\n(303, nan)\n(304, nan)\n(305, nan)\n(306, nan)\n(307, nan)\n(308, nan)\n(309, nan)\n(310, nan)\n(311, nan)\n(312, nan)\n(313, nan)\n(314, nan)\n(315, nan)\n(316, nan)\n(317, nan)\n(318, nan)\n(319, nan)\n(320, nan)\n(321, nan)\n(322, nan)\n(323, nan)\n(324, nan)\n(325, nan)\n(326, nan)\n(327, nan)\n(328, nan)\n(329, nan)\n(330, nan)\n(331, nan)\n(332, nan)\n(333, nan)\n(334, nan)\n(335, nan)\n(336, nan)\n(337, nan)\n(338, nan)\n(339, nan)\n(340, nan)\n(341, nan)\n(342, nan)\n(343, nan)\n(344, nan)\n(345, nan)\n(346, nan)\n(347, nan)\n(348, nan)\n(349, nan)\n(350, nan)\n(351, nan)\n(352, nan)\n(353, nan)\n(354, nan)\n(355, nan)\n(356, nan)\n(357, nan)\n(358, nan)\n(359, nan)\n(360, nan)\n(361, nan)\n(362, nan)\n(363, nan)\n(364, nan)\n(365, nan)\n(366, nan)\n(367, nan)\n(368, nan)\n(369, nan)\n(370, nan)\n(371, nan)\n(372, nan)\n(373, nan)\n(374, nan)\n(375, nan)\n(376, nan)\n(377, nan)\n(378, nan)\n(379, nan)\n(380, nan)\n(381, nan)\n(382, nan)\n(383, nan)\n(384, nan)\n(385, nan)\n(386, nan)\n(387, nan)\n(388, nan)\n(389, nan)\n(390, nan)\n(391, nan)\n(392, nan)\n(393, nan)\n(394, nan)\n(395, nan)\n(396, nan)\n(397, nan)\n(398, nan)\n(399, nan)\n(400, nan)\n(401, nan)\n(402, nan)\n(403, nan)\n(404, nan)\n(405, nan)\n(406, nan)\n(407, nan)\n(408, nan)\n(409, nan)\n(410, nan)\n(411, nan)\n(412, nan)\n(413, nan)\n(414, nan)\n(415, nan)\n(416, nan)\n(417, nan)\n(418, nan)\n(419, nan)\n(420, nan)\n(421, nan)\n(422, nan)\n(423, nan)\n(424, nan)\n(425, nan)\n(426, nan)\n(427, nan)\n(428, nan)\n(429, nan)\n(430, nan)\n(431, nan)\n(432, nan)\n(433, nan)\n(434, nan)\n(435, nan)\n(436, nan)\n(437, nan)\n(438, nan)\n(439, nan)\n(440, nan)\n(441, nan)\n(442, nan)\n(443, nan)\n(444, nan)\n(445, nan)\n(446, nan)\n(447, nan)\n(448, nan)\n(449, nan)\n(450, nan)\n(451, nan)\n(452, nan)\n(453, nan)\n(454, nan)\n(455, nan)\n(456, nan)\n(457, 
nan)\n(458, nan)\n(459, nan)\n(460, nan)\n(461, nan)\n(462, nan)\n(463, nan)\n(464, nan)\n(465, nan)\n(466, nan)\n(467, nan)\n(468, nan)\n(469, nan)\n(470, nan)\n(471, nan)\n(472, nan)\n(473, nan)\n(474, nan)\n(475, nan)\n(476, nan)\n(477, nan)\n(478, nan)\n(479, nan)\n(480, nan)\n(481, nan)\n(482, nan)\n(483, nan)\n(484, nan)\n(485, nan)\n(486, nan)\n(487, nan)\n(488, nan)\n(489, nan)\n(490, nan)\n(491, nan)\n(492, nan)\n(493, nan)\n(494, nan)\n(495, nan)\n(496, nan)\n(497, nan)\n(498, nan)\n(499, nan)\n<ECODE> Can someone please show me what is wrong here?",
"isAccepted": false,
"likes": null,
"poster": "nafizh1"
},
{
"contents": "Uncomment the first pair of comments in the for loop. The gradient buffers have to be manually reset before fresh gradients are calculated. Your for loop should be: <SCODE>for t in range(500):\n y_pred = x.mm(w1).clamp(min = 0).mm(w2)\n \n loss = (y_pred - y).pow(2).sum()\n print(t, loss.data[0])\n \n w1.grad.data.zero_()\n w2.grad.data.zero_()\n \n loss.backward()\n #print w1.grad.data\n #print w2.grad.data\n w1.data -= learning_rate * w1.grad.data\n w2.data -= learning_rate * w2.grad.data\n<ECODE>",
"isAccepted": false,
"likes": 3,
"poster": "pranav"
},
{
"contents": "In the latest version of pytorch, uncommenting those 2 lines shows error. That is why I uncommented them. <SCODE>AttributeError: 'NoneType' object has no attribute 'data'\n<ECODE>",
"isAccepted": false,
"likes": null,
"poster": "nafizh1"
},
{
"contents": "I had this issue too and found out that in the latest the grad data is not created (not just initialized but its not even created) until backward is called and gradients need to be computed. The above code basically checks: If first iteration (i.e, t = 0) then don’t zero the data else zero out the grad data.",
"isAccepted": false,
"likes": 1,
"poster": "shaun"
},
{
"contents": "Thanks, that solved the problem. But I was wondering if there is a more elegant solution. Also, why do you have to zero the grad data every time in a loop? I assume that is something that should take care of itself.",
"isAccepted": false,
"likes": null,
"poster": "nafizh1"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "tom"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "Chun_Li"
},
{
"contents": "@Chun_Li Yes we must.",
"isAccepted": false,
"likes": null,
"poster": "shaun"
},
{
"contents": "",
"isAccepted": false,
"likes": null,
"poster": "cyyyyc123"
}
] | false |
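A tidier version of the fix discussed in this thread, written in the post-0.4 style where Tensor and Variable are merged: do the update under torch.no_grad() and zero the gradients only after they exist (i.e. after the first backward call); a sketch:
<SCODE>
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

lr = 1e-6
for t in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    loss.backward()

    with torch.no_grad():
        w1 -= lr * w1.grad
        w2 -= lr * w2.grad
        # .grad only exists after the first backward(), so zero it here
        w1.grad.zero_()
        w2.grad.zero_()
<ECODE>
Using an optimizer (e.g. optim.SGD with optimizer.zero_grad() at the top of the loop) removes this bookkeeping entirely.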