st100800 | Hello, I’m new to PyTorch and Python, so I don’t know exactly how to reshape tensors.
For example, I have a [2, 512, 10, 10] <-- (batch_num, C, H ,W) tensor.
Let’s assume that I apply some channel-wise operation to each cell of the tensor. In this case, the operation is applied 200 times (2 * 10 * 10, batch_num * H * W), and the shape of the result is [200, 512].
In this case, I hope to reshape the result tensor ([200, 512]) back to the original shape [2, 512, 10, 10].
I think I can reshape the tensor easily using the “view” function. However, I am worried that the values of the tensor will get mixed up (every cell has to stay at its original position).
So… Could you tell me how to reshape the tensor properly?
( Sorry, I can’t write English well.) |
st100801 | I think you need to either keep the batch and channel dimension or combine those two, but you shouldn’t combine the batch, height, and width dimensions. So your resulting tensor should be (100, 1024), then you do tensor.reshape(H, W, batch_num, C).permute(2, 3, 0, 1). You’ll also have to pay attention to how you permute the tensor before your operation. Honestly, I would try to have your resulting tensor be of dimensions (1024, 100) or (2, 512, 100) and not the transpose, but it’s not 100% necessary to do so. There really is not enough information here to see exactly what you are doing. If you could provide some code, or mathematical equations, it would be easier to help you. |
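A minimal sketch of the safe round trip for the original shapes (keeping every value at its original position): flatten with permute before view, and undo it in the same way.
import torch
x = torch.randn(2, 512, 10, 10)                             # (batch, C, H, W)
flat = x.permute(0, 2, 3, 1).contiguous().view(-1, 512)     # (200, 512), one row per (batch, H, W) cell
# ... apply the channel-wise operation on flat here ...
restored = flat.view(2, 10, 10, 512).permute(0, 3, 1, 2)    # back to (2, 512, 10, 10)
print(torch.equal(x, restored))                             # True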
st100802 | Consider the following two algorithms with regard to constructing the dataset loader:
Before looping through the training epochs
Within the loop for each epoch
Does the first algorithm have a SIGNIFICANT advantage over the second one? |
st100803 | In general: no, it doesn’t. However, I try to reduce repeatedly executed code to a necessary minimum. That said, I usually initialize it before the loop since there are no disadvantages to doing so. |
st100804 | I am getting the error when running the Deep Image Prior’s Super-Resolution. I have attached the screenshots below. Can someone tell me where I am going wrong? Also, some people asked me to reduce the batch size and I don’t know how to do it.
p.s. I am using CUDA 9.2 and GPU 12GB which is sufficient to run this.
(Five screenshots of the super-resolution notebook and the error were attached to the original post.) |
st100805 | The indexing array is jagged; how can I make this work?
import torch
a = torch.randn(25, 300)
i = [[0,0],[1,1,1],[2,2,2,2]]
j = [[12,10],[13,24,26], [8,9,10,11]]
a[i,j] #error <-expected sequence of length 2 at dim 1 (got 3) |
st100806 | I’m not sure why you need the nested list. Aren’t the indices you really need just something like this?
i = [0,0,1,1,1,2,2,2,2]
j = [12,10,13,24,26,8,9,10,11]
If that is the case, you can just do the following:
i = torch.tensor([0,0,1,1,1,2,2,2,2]).long()
j = torch.tensor([12,10,13,24,26,8,9,10,11]).long()
a[i,j]
And if the output size is not what you are looking for, just use
b = a[i,j].view(3,3) |
st100807 | Because I want to keep the jagged shape
[ [a[0,12], a[0,10]],
[a[1,13], a[1,24], a[1,26]],
...
] |
st100808 | I wanted to enable attention for my multi-to-one LSTM model. However, I did not find any attention layers in the official doc. Is that implemented? If not, are there any good, simple references on how to implement it? |
st100809 | I am not sure what your multi-to-one LSTM means, so here is just an example.
In my practice, my model’s input is a video, and I need to assign an attention weight for each frame. However, due to the limit of the memory, it’s not possible to handle the whole video in one input batch.
So I use a fc layer with one output value as the attention layer. Suppose the input video has 4x frames, but my model can only handle x frames each time. I will let my model run 4 batches and output 4x values by the fc layer. After that, I can apply a softmax to those 4x values and consequently solve the problem. |
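A minimal sketch of that idea (names and sizes are illustrative, not from the original post): one fc layer produces a single score per frame feature, a softmax over all frames turns the scores into attention weights, and the frames are pooled with a weighted sum.
import torch
import torch.nn as nn
import torch.nn.functional as F
class FrameAttention(nn.Module):
    def __init__(self, feat_dim):
        super(FrameAttention, self).__init__()
        self.score = nn.Linear(feat_dim, 1)    # one attention logit per frame
    def forward(self, frames):                 # frames: (num_frames, feat_dim)
        logits = self.score(frames)            # (num_frames, 1)
        weights = F.softmax(logits, dim=0)     # normalize over the frame axis
        return (weights * frames).sum(dim=0)   # weighted sum -> (feat_dim,)
pooled = FrameAttention(128)(torch.randn(16, 128))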
st100810 | In Pytorch, how would you use a prediction from a previous timestep as input into the next timestep? I’m guessing it’s not possible with torch.nn.LSTM ? |
st100811 | Solved by PengZhenghao in post #2 |
st100812 | In the document, the examples using LSTM look like:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
Where (hn, cn) contains the hidden state and cell state of current LSTM. So you can use (hn, cn) outputed from the previous timestep as the input of LSTM at the next timestep. |
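A minimal decoding-loop sketch of that idea (the projection layer and sizes are illustrative, not from the original post): run the LSTM one timestep at a time, carry (hn, cn) along, and feed the projected output of step t back in as the input of step t+1.
import torch
import torch.nn as nn
input_size, hidden_size, num_layers, batch = 10, 20, 2, 3
rnn = nn.LSTM(input_size, hidden_size, num_layers)
proj = nn.Linear(hidden_size, input_size)      # maps the hidden state back to input space
x_t = torch.randn(1, batch, input_size)        # first input, seq_len = 1
h = torch.zeros(num_layers, batch, hidden_size)
c = torch.zeros(num_layers, batch, hidden_size)
outputs = []
for t in range(5):
    out, (h, c) = rnn(x_t, (h, c))             # one timestep at a time
    x_t = proj(out)                            # the prediction becomes the next input
    outputs.append(x_t)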
st100813 | Hi, I have a tensor q of size [150, 96, 96, 240] (here: [batch_size, sentence_length, sentence_length, weight_size]). I need to multiply it with a weight vector of size [240] and either reshape or use a convolution to make q of size [150, 96, 96], basically combining the last [96, 96, 240] into size [96, 96]. I have a short code snippet, but I don’t know how to reshape properly, and whether doing a convolution makes more sense in this case (and if so, how to do it).
Any help appreciated.
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
# create a learnable weight vector
self.my_weights = nn.Parameter(torch.FloatTensor(240))
nn.init.xavier_normal_(self.my_weights)
def forward(self, q):
# q = np.random.randn(150, 96, 96, 240)
# a = torch.from_numpy(q)
# size: [batch_size, sentence_length, sentence_length, weight_size]
# multiply the weight vector with the last column of the tensor, is this correct?
k= torch.matmul(q, self.my_weights)
# reshape or do a 2d Convolution
# v = TODO: reshape or convolve: [150, 96, 96, 240] --> [150, 96, 96, 1] --> [150, 96, 96]
Visual representation I drew up: https://imgur.com/a/yuIc3t4 |
st100814 | How do you want to reduce the last dimension?
Do you want to sum it?
q = torch.randn(15, 9, 9, 2)
w = torch.randn(15, 9, 9)
(q*w.unsqueeze(3)).sum(3) |
st100815 | I only have one tensor q=[150, 96, 96, 240], I have to somehow transform it into q=[150, 96, 96]. In your example, there is an additional w of size [150, 96, 96], which I don’t have. A summation could work I guess, but when I try it on q, it doesn’t do anything in my case: q.sum(3), doesn’t take away the last column. |
st100816 | Sorry my bad!
I’ve read your problem and thought I still had it in mind when I created the example.
Apparently I’ve mixed up the shapes.
Would this work then:
q = torch.randn(15, 9, 9, 2)
w = torch.randn(1, 1, 1, 2)
(q*w).sum(3) |
st100817 | Sorry, I don’t think I fully understand what you are suggesting.
So, instead of
self.my_weights = nn.Parameter(torch.FloatTensor(240, 240))
I should somehow make it into:
self.my_weights= nn.Parameter(torch.FloatTensor([1,1,1,240], [1,1,1,240])) <-- that doesn’t seem to work in PyTorch. How can I initialize parameters with multiple dimensions properly?
So, based on my code snippet in the original post, I would do something like the following:
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
# create a learnable weight vector
self.my_weights= nn.Parameter(torch.FloatTensor([1,1,1,240], [1,1,1,240]))
nn.init.xavier_normal_(self.my_weights)
def forward(self, q):
# q = np.random.randn(150, 96, 96, 240)
# size: [batch_size, sentence_length, sentence_length, weight_size]
k = (q*self.my_weights).sum(3)
Is this what you are suggesting or am I misunderstanding you? Also, thanks for looking into this! |
st100818 | Maybe I’m misunderstanding your use case completely, so feel free to correct me.
You’ve said
ivan-bilan:
I need to multiply it with a weight vector of size [240]
, but somehow you are trying to initialize a tensor of shape [240, 240] as my_weights.
If you just want random numbers, you can reuse my code example: my_weights = nn.Parameter(torch.randn(1, 1, 1, 240)). |
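A minimal sketch of that suggestion with smaller sizes: a learnable (1, 1, 1, D) weight broadcasts against a (B, S, S, D) input, and summing over the last dimension yields (B, S, S).
import torch
import torch.nn as nn
class WeightedSum(nn.Module):
    def __init__(self, d):
        super(WeightedSum, self).__init__()
        self.my_weights = nn.Parameter(torch.randn(1, 1, 1, d))
    def forward(self, q):
        return (q * self.my_weights).sum(3)
q = torch.randn(4, 8, 8, 16)       # (batch, sent_len, sent_len, weight_size)
print(WeightedSum(16)(q).shape)    # torch.Size([4, 8, 8])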
st100819 | Oh, I think I had nn.Linear there before, so it was 240 size in and 240 size out. I messed the example up because of that. It was supposed to be just 240, my bad!
I tried what you have suggested, and it looks like the following now:
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
# create a learnable weight vector
self.my_weights= nn.Parameter(torch.rand(1, 1, 1, 240)) # I need the vector to be learned weights
nn.init.xavier_normal_(self.my_weights)
def forward(self, q):
# q = np.random.randn(150, 96, 96, 240)
# size: [batch_size, sentence_length, sentence_length, weight_size]
k = (q*self.my_weights)
k = k.sum(3)
But I am very quickly getting an out of memory error on the multiplication part:
k = (q*self.my_weights)
RuntimeError: CUDA error: out of memory
The memory jumps to over 6Gb VRAM (my limit on Nvidia 980Ti), while it usually only needs about 2-3 Gb. Any ideas about this? Thanks.
Also, what does the sum(3) actually do in this case? I hope the last column doesn’t just disappear, I need it applied to the previous two (as shown in the drawing I provided). |
st100820 | I think, you can use conv2d to achieve what you want. Something like:
data = torch.randn(150, 96, 96, 240).cuda()
conv = nn.Conv2d(240, 1, kernel_size=1).cuda()
output = conv(data.permute((0,3,1,2))).squeeze()
P. S. I didn’t check for syntax errors. |
st100821 | Thanks, I think this is what I needed. Unfortunately, the first part of initializing the weights doesn’t work with this solution for some reason.
Now, I have the following:
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
# create a learnable weight vector
self.my_weights= nn.Parameter(torch.FloatTensor(240)) # I need the vector to be learned weights
nn.init.xavier_normal_(self.my_weights)
self.conv = nn.Conv2d(240, kernel_size=1, out_channels=1) # are the arguments correct?
def forward(self, q):
# q = np.random.randn(150, 96, 96, 240)
# size: [batch_size, sentence_length, sentence_length, weight_size]
k = (q*self.my_weights)
k = self.conv(k.permute((0, 3, 1, 2))).squeeze()
I get the following error:
raise ValueError("Fan in and fan out cannot be computed for tensor with less than 2 dimensions")
ValueError: Fan in and fan out cannot be computed for tensor with less than 2 dimensions
(cuda_pytorch)
which is apparently coming from: nn.init.xavier_normal_(self.my_weights)
If I don’t do nn.init.xavier_normal_(self.my_weights), the loss is NaN.
Would doing:
self.concat_linear = nn.Linear(240, 240)
nn.init.kaiming_normal_(self.concat_linear.weight)
k = self.concat_linear(q)
be an equivalent of:
self.my_weights= nn.Parameter(torch.FloatTensor(240))
nn.init.xavier_normal_(self.my_weights)
k = (q*self.my_weights)
?
However, when I do that, I get the following later on in my code:
File "...\lib\site-packages\torch\tensor.py", line 381, in __iter__
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor |
st100822 | In my view, you do not need separate weights. nn.Conv2d already has learnable weights. |
st100823 | Thanks, I tried using only Conv2d without the nn.Parameter. Unfortunately, still getting the following error:
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor |
st100824 | Sure thing. The more or less full code looks like this (with random data):
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
self.relu = nn.RReLU()
self.conv = nn.Conv2d(240, kernel_size=1, out_channels=1)
def forward(self, q, k, values):
# q = k = values
q = np.random.randn(150, 96, 120)
k = np.random.randn(150, 96, 120)
# now transform both into one tensor of size: [150, 96, 96, 240]
# not specifically related to the question above though
a = torch.from_numpy(q)
b = torch.from_numpy(k)
bs, s, v = a.size()
a_ = a.repeat(1, 1, s).view(bs, s*s, v)
b_ = b.repeat(1, s, 1)
concat_vec = torch.stack((a_, b_), 2).view(bs, s, s, -1)
# concat_vec size:
# [batch_size, sentence_length, sentence_length, weight_size] = [150, 96, 96, 240]
# now transform into [150, 96, 96]
attention = self.conv(concat_vec.permute((0, 3, 1, 2))).squeeze()
attention = self.relu(attention)
output = torch.bmm(attention, values)
return output
In case this helps, the code is the modification of the scaled-dot product attention from the Self-attention paper (https://arxiv.org/abs/1703.03130), the modification is the implementation of the concatenation weighting function from this paper: https://arxiv.org/abs/1711.07971 (Section 3.2).
Originally, the code looks like this without the modification and works just fine:
import torch
import torch.nn as nn
import numpy as np
class SomeClass(nn.Module):
def __init__(self):
super().__init__()
self.softmax = nn.Softmax(dim=2)
def forward(self, q, k, values):
attn = torch.bmm(q, k.transpose(1, 2))
attn = self.softmax(attn)
output = torch.bmm(attn, values)
return output
Error I am getting (happens later on in the code that uses the transformed output):
File "...\lib\site-packages\torch\tensor.py", line 381, in __iter__
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor |
st100825 | Hi everyone,
I am trying to implement a graph convolutional layer (as described in Semi-Supervised Classification with Graph Convolutional Networks) in PyTorch.
For this I need to perform multiplication of the dense feature matrix X by a sparse adjacency matrix A (sparse x dense -> dense). I don’t need to compute the gradients with respect to the sparse matrix A.
As mentioned in this thread, torch.mm should work in this case, however, I get the
TypeError: Type torch.sparse.FloatTensor doesn't implement stateless method addmm
class GraphConv(nn.Module):
def __init__(self, size_in, size_out):
super(GraphConv, self).__init__()
self.W = nn.parameter.Parameter(torch.Tensor(size_in, size_out))
self.b = nn.parameter.Parameter(torch.Tensor(size_out))
def forward(self, X, A):
return torch.mm(torch.mm(A, X), self.W) + self.b
A # torch.sparse.FloatTensor, size [N x N]
X # torch.FloatTensor, size [N x size_in]
A = torch.autograd.Variable(A, requires_grad=False)
X = torch.autograd.Variable(X, requires_grad=False)
# If I omit the two lines above, I get invalid argument error (torch.FloatTensor, Parameter) for torch.mm
gcn = GraphConv(X.size()[1], size_hidden)
gcn(X, A) # error here
Is there something I am doing wrong, or is this functionality simply not present in PyTorch yet? |
st100826 | Hi Oleksandr - it’s a little hard to tell from your post which mm is actually triggering the error. Could you possibly provide a script that triggers the issue? We do support mm for sparse x dense I believe. |
st100827 | Here is a simple script that reproduces the error: https://gist.github.com/shchur/5bf7eca2e68a44ae5c15947b553f9f1e
I get the following error when running this code
(A screenshot of the error traceback was attached to the original post.) |
st100828 | Here is an even clearer example: https://gist.github.com/shchur/9428f6f00dd8e6ea617640d51fc7c8a9 (sparse_dense_mm.py)
import numpy as np
import scipy.sparse as sp
import torch
def to_torch_sparse_tensor(M):
"""Convert Scipy sparse matrix to torch sparse tensor."""
M = M.tocoo().astype(np.float32)
indices = torch.from_numpy(np.vstack((M.row, M.col))).long()
values = torch.from_numpy(M.data)
shape = torch.Size(M.shape)
(The embedded file is truncated here; see the gist for the full example.) |
st100829 | I think there are a few things here:
I believe the issue with the first thing is that internally, when calling mm on Variables, it calls torch.addmm and we don’t have the proper function defined on sparse tensors. (We actually have this implemented, but I think it’s named incorrectly.)
I’m less certain about nn.Parameter, I’ll have to let someone else answer that.
I will file an issue for #1. |
st100830 | Here is the GitHub issue on this topic: https://github.com/pytorch/pytorch/issues/2389. Apparently, this functionality is not supported at the moment, so we should wait for the feature to be added. |
st100831 | There is already a patch for torch.mm. You can use that in the meantime. I am using it currently and it is working fine.
Does PyTorch support autograd on sparse matrix?
Hi there,
I am a beginner trying to learn PyTorch and there is one question bugging me. I know PyTorch support sparse x dense -> dense function in torch.mm. However, I don’t think it currently supports autograd on sparse variables (say sparse matrix). Examples are:
x = torch.sparse.FloatTensor(2,10)
y = torch.FloatTensor(10, 5)
sx = torch.autograd.Variable(x)
sy = torch.autograd.Variable(y)
torch.mm(sx, sy) # fails
Errors: TypeError: Type torch.sparse.FloatTensor doesn’t implement stateless … |
st100832 | Has there been any updates on doing sparse x dense operation with Variables?
I can do the following,
import torch
from torch.autograd import Variable as V
i = torch.LongTensor([[0, 1, 1], [1 ,1 ,1]])
v = torch.FloatTensor([3, 4, 5])
m = torch.sparse.FloatTensor(i, v, torch.Size([2,3]))
m2 = torch.randn(3,2)
print(torch.mm(m, m2))
But adding this breaks it,
M = V(m)
M2 = V(m2)
print(torch.mm(M, M2))
Traceback (most recent call last):
File "test_sparse.py", line 11, in <module>
print(torch.mm(M, M2))
RuntimeError: Expected object of type Variable[torch.sparse.FloatTensor] but found type Variable[torch.FloatTensor] for argument #1 'mat2' |
st100833 | I converted it to a np.float32.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import scipy.sparse as sp
n_samples = 100
n_features = 20
A_ = sp.random(n_samples, n_samples, dtype=np.float32)
X_ = np.random.random([n_samples, n_features]).astype(np.float32)
#↑here
def to_torch_sparse_tensor(M):
M = M.tocoo().astype(np.float32)
indices = torch.from_numpy(np.vstack((M.row, M.col))).long()
values = torch.from_numpy(M.data)
shape = torch.Size(M.shape)
T = torch.sparse.FloatTensor(indices, values, shape)
return T
X = torch.from_numpy(X_)
A = to_torch_sparse_tensor(A_)
class GraphConv(nn.Module):
def __init__(self, size_in, size_out):
super(GraphConv, self).__init__()
self.W = nn.parameter.Parameter(torch.Tensor(size_in, size_out))
self.b = nn.parameter.Parameter(torch.Tensor(size_out))
# Initialize weights
variance = 2 / (size_in + size_out)
self.W.data.normal_(0.0, variance)
self.b.data.normal_(0.0, variance)
def forward(self, X, A):
return torch.mm(torch.mm(A, X), self.W) + self.b
gcn = GraphConv(X.size()[1], 128)
gcn(X, A)
tensor([[ 2.5033e-02, 3.0702e-02, -4.3435e-02, …, 3.1698e-02,
-1.2819e-02, 4.3160e-02],
[ 1.3049e-02, -3.4871e-03, -2.6543e-02, …, 8.5519e-03,
7.0735e-03, 1.4211e-02],
[ 5.3494e-02, 3.7187e-02, -5.0533e-02, …, 2.2295e-02,
-1.8713e-02, 4.8472e-02],
…,
[ 2.3762e-02, 7.9290e-02, -6.0602e-02, …, 6.9588e-02,
-9.7072e-02, 1.3955e-01],
[ 3.3650e-02, 3.1555e-02, -2.4376e-02, …, 2.0983e-02,
2.0457e-03, 5.1157e-02],
[ 1.4304e-02, -1.9657e-02, -1.5910e-02, …, 3.1639e-03,
9.0677e-03, 8.1202e-03]]) |
st100834 | I have encountered the error “CUDNN_STATUS_INTERNAL_ERROR”; the only solution I found requires a reboot. Is there any way to solve it without a reboot?
The code is similar to the following, but I couldn’t reproduce it without knowing the reason…
import torch
import torch.nn as nn
a = torch.randn([32, 1280, 7, 7])
conv1x1 = nn.Conv2d(1280, 1280, kernel_size=1, stride=1)
a = conv1x1(a) |
st100835 | I want to parallelize a code that looks roughly like this:
L = ModuleList()
x = torch.randn(batch_size, n, d)
y = torch.zeros(batch_size, n, d)
idx = ...  # a list of lists of indices (with different lengths)
for i, module in enumerate(L):
y[:, idx[i], :] += module(x[:, idx[i], :])
Note that some idx[i][j] can be equal to another idx[i’][j’].
Since the loop iterations can be done in any order I’m trying to parallelize this for loop. Is there any way to do it?
Thanks! |
st100836 | It doesn’t seem like there’s any diagonal normal Distribution. Is there a reason for this? I would personally find it extremely handy since I often need diagonal normals. Using Normal doesn’t suffice for me since I want things with event_shape = [d]. |
st100837 | And I should add that it’s a real pain to go from [batch_size, dim] to [batch_size, dim, dim], expanding out diagonal matrices tensor[i, :, :]. |
st100838 | you can do this
>>> result = torch.zeros(2, 4, 4)
>>> sigma = torch.tensor([0, 0.01, 1, 10])
>>> result.diagonal(dim1=1, dim2=2).normal_().mul_(sigma)
tensor([[ 0.0000, -0.0064, 0.6829, -2.7818],
[ 0.0000, 0.0084, -1.2700, 10.5473]])
>>> result
tensor([[[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.0064, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.6829, 0.0000],
[ 0.0000, 0.0000, 0.0000, -2.7818]],
[[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0084, 0.0000, 0.0000],
[ 0.0000, 0.0000, -1.2700, 0.0000],
[ 0.0000, 0.0000, 0.0000, 10.5473]]]) |
st100839 | To clarify: I’m looking for a MultivariateNormal Distribution but with a diagonal covariance matrix. Just sampling is not the issue: I also need log_prob, entropy, and so on. |
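A minimal sketch of one way to get this: wrap Normal in torch.distributions.Independent to reinterpret the last batch dimension as an event dimension, which gives event_shape = [d] plus log_prob and entropy that sum over that dimension.
import torch
from torch.distributions import Normal, Independent
loc = torch.zeros(32, 4)                 # (batch_size, d)
scale = torch.ones(32, 4)
diag_normal = Independent(Normal(loc, scale), 1)
x = diag_normal.sample()                 # shape (32, 4)
print(diag_normal.event_shape)           # torch.Size([4])
print(diag_normal.log_prob(x).shape)     # torch.Size([32]), summed over the event dim
print(diag_normal.entropy().shape)       # torch.Size([32])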
st100840 | Outside of an autograd function, the following code shows that b requires_grad, as I’d expect to happen:
import numpy as np
import torch
from torch import nn, optim
N = 5
K = 3
a = nn.Parameter(torch.rand(N, K))
idxes = torch.from_numpy(np.random.choice(N, K, replace=True))
print('a.requires_grad', a.requires_grad)
b = a[idxes]
print('b.requires_grad', b.requires_grad)
print('type(a)', type(a))
print('type(b)', type(b))
output:
a.requires_grad True
b.requires_grad True
type(a) <class 'torch.nn.parameter.Parameter'>
type(b) <class 'torch.Tensor'>
however, if I run this inside an autograd function, b no longer requires_grad:
import numpy as np
import torch
from torch import nn, optim, autograd
def run2():
class my_function(autograd.Function):
@staticmethod
def forward(ctx, x):
print('x.requires_grad', x.requires_grad)
N, K = x.size()
idxes = torch.from_numpy(np.random.choice(N, K, replace=True))
z = x[idxes]
print('z.requires_grad', z.requires_grad)
return z
my_function = my_function.apply
N = 5
K = 3
# a = torch.rand(N, K, requires_grad=True)
a = torch.rand(N, K)
a = nn.Parameter(a)
my_function(a)
if __name__ == '__main__':
run2()
Output:
x.requires_grad True
z.requires_grad False
Why is the z in this case marked as not requiring grad? What can I do so that it does require grad, and so that gradients flowing through z will flow into x? |
st100841 | Solved by tom in post #2 |
st100842 | Similar to your other question, you need with torch.enable_grad(): if you want to record the graph within the forward or backward of a autograd.Function.
Best regards
Thomas |
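A minimal sketch of that fix applied to the snippet above: wrapping the indexing in torch.enable_grad() inside forward() makes z require grad again.
import numpy as np
import torch
from torch import nn, autograd
class MyFunction(autograd.Function):
    @staticmethod
    def forward(ctx, x):
        with torch.enable_grad():
            idxes = torch.from_numpy(np.random.choice(x.size(0), x.size(1), replace=True))
            z = x[idxes]
            print('z.requires_grad', z.requires_grad)   # True
        return z
a = nn.Parameter(torch.rand(5, 3))
MyFunction.apply(a)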
st100843 | Is there an inverse of sigmoid (a logit with domain -1 to 1) in PyTorch, i.e. some inversion of the output of nn.Sigmoid()? The scipy logit function only takes the 0 to 1 domain, and I’d like -1 to 1.
Thanks! |
st100844 | Hello,
note that the name sigmoid might mean different things to different groups of people.
Here, most commonly, sigmoid is sigmoid(x) = 1/(1+torch.exp(-x)), mapping the real line to (0,1), so the inverse logit(y) = torch.log(y/(1-y)) is defined on (0,1) only.
If you have renormalized sigmoid to -1+2/(1+torch.exp(-x)) to map to (-1, 1), you could use the above logit with logit(0.5*(1+y)).
If you want the inverse of tanh, which is perhaps the most common mapping of the real line to (-1,1), you could code the inverse up yourself using torch.log, by using artanh(y) = 0.5*(torch.log(1+y)/(1-y)).
Best regards
Thomas |
st100845 | Thanks, Thomas. This was a helpful clarification of the torch sigmoid implementation.
Best,
Michael |
st100846 | @tom, do you know of any activation function whose inverse has output range (0, inf)? I am trying to generate values that are always greater than zero, so when inverting the output of the generated value, I don’t want the logit for example to map sigmoid outputs < 0.5 to negative numbers |
st100847 | Softplus is probably the most common if you don’t want ReLU. Or pick one from the rectifier zoo at Wikipedia that does what you want (some are not positive, though).
Best regards
Thomas |
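A minimal sketch of the softplus direction and its inverse: softplus maps the real line to (0, inf), and log(expm1(y)) maps it back.
import torch
import torch.nn.functional as F
x = torch.randn(5)
y = F.softplus(x)                         # always > 0
x_back = torch.log(torch.expm1(y))        # inverse softplus
print(torch.allclose(x, x_back, atol=1e-5))   # True up to floating-point error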
st100848 | You got the parenthesis of the log wrong, it should be:
artanh(y) = 0.5 * torch.log((1+y)/(1-y)) |
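A minimal sketch of both inverses with the corrected parentheses, checked against torch.sigmoid and torch.tanh:
import torch
def logit(y):                  # inverse of torch.sigmoid, defined on (0, 1)
    return torch.log(y / (1 - y))
def artanh(y):                 # inverse of torch.tanh, defined on (-1, 1)
    return 0.5 * torch.log((1 + y) / (1 - y))
x = torch.randn(4)
print(torch.allclose(logit(torch.sigmoid(x)), x, atol=1e-5))
print(torch.allclose(artanh(torch.tanh(x)), x, atol=1e-5))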
st100849 | malbergo:
do you know of any activation function whose inverse has output range (0, inf)
I know this is a bit necro, but … wouldn’t a function whose inverse has output range (0, inf) mean that any input value less than 0 would be illegal? Wouldn’t that destabilize the network some?
Alternatively, I guess taking the square sort of complies with this, if you always implicitly take the positive square root in the inverse direction, perhaps? |
st100850 | I have a class that holds many nn.Modules and several pieces of data that aren’t modules as well: e.g.
class MyClass(object):
def __init__(self):
self._module1 = ...
self._module2 = ...
self._not_a_module1 = ...
self._not_a_module2 = ...
I’d like to save and restore all of the data held by the class. Is it safe for me to directly pickle the class? e.g.
import pickle
c = MyClass()
with open("saved.txt", "w") as f:
pickle.dump(c, f)
# ... later
with open("saved.txt", "r") as f:
c = pickle.load(f)
Or is it necessary for me to call torch.save on the modules and separately save the non-module data on my own?
Related question: is directly pickling nn.Modules safe to do? Or do I have to save them using torch.save? |
st100851 | Hi,
torch.save() is a superset of pickle.dump(). It has the same capabilities (as it uses pickle under the hood), but adds more efficient handling of pytorch tensors.
Using torch.save() of both modules and non-modules should be the simplest solution for you.
I am not sure what will be the exact impact of using pickle.dump() on Module objects. If you do need to know, you should ping smth to know who you should ask about this. You can also check the implementation in this Python file. |
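A minimal sketch of that suggestion (the attribute names mirror the hypothetical class above): put the module state_dicts and the plain Python data into one dict and hand it to torch.save, then restore everything with torch.load.
import torch
import torch.nn as nn
module1 = nn.Linear(4, 2)
checkpoint = {
    'module1': module1.state_dict(),
    'not_a_module1': {'step': 10, 'notes': 'any picklable object'},
}
torch.save(checkpoint, 'saved.pt')
restored = torch.load('saved.pt')
module1.load_state_dict(restored['module1'])
step = restored['not_a_module1']['step']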
st100852 | Hello team,
Great work on PyTorch, keep the momentum. I wanted to try my hands on it with the launch of the new MultiLabeling Amazon forest satellite images on Kaggle.
Note: new users can only post 2 links in a post so I can’t direct link everything
I created the following code as an example this weekend to load and train a model on Kaggle data and wanted to give you my feedback on PyTorch. I hope it helps you.
Loading data that is not from a regular dataset like MNIST or CIFAR is confusing and hard. I’m aware of the ImageFolder DataSet, but that forces an inflexible hierarchy and just plain doesn’t work for multilabel or multiclass tasks.
First of all, it’s confusing because of the DataSet and DataLoader distinction which is non-obvious. I do think there is merit to keep those separated but documentation must be improved on the role of both. If possible I would rename DataSet to DataStorage or DataLocation to make it obvious that we have a pointer to a storage and an iterator to that storage.
Secondly, it’s hard because none of the examples show a real-world dataset: a CSV with a list of image paths and corresponding labels.
There are no validation split facilities. An example of how to use SubsetRandomSampler to create something similar to scikit-learn’s train_test_split would be great (see https://github.com/pytorch/pytorch/issues/1106). It should accept a percentage and a random seed at least, or a sklearn fold object (KFold, StratifiedKFold) at best. This is critical for use in Kaggle and other ML competitions. (I will implement a naive one for the competition.)
There is no documentation about data augmentation. The following post mentions it. However, as far as I understood the documentation, if you have a 40000-image training dataset, even if you use PIL transforms you still get 40000 training samples. Data augmentation would be to get +40000 training samples per transformation done.
As a side-note, I believe data augmentation should be done at the DataLoader level, as mentioned in Discussion about datasets and dataloaders.
Computing the shape after a view is non-trivial i.e. the 2304 in the following code for a 32x32x3 image
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(2304, 256)
self.fc2 = nn.Linear(256, 17)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(x.size(0), -1) # Flatten layer
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.sigmoid(x)
Points 5 or 6 would probably be best in a PyTorch wrapper but I will still mention them.
Early stopping would be nice to combat overfitting when the loss doesn’t decrease for a certain number of epochs (a rough sketch follows below)
Pretty printing of epochs, accuracy/loss |
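A rough sketch of the early-stopping idea from the list above (validate() is only a stand-in for a real validation pass): stop training when the validation loss has not improved for patience consecutive epochs.
import random
def validate():
    return random.random()   # placeholder validation loss
best_loss, patience, bad_epochs = float('inf'), 5, 0
for epoch in range(100):
    val_loss = validate()
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print('early stopping at epoch', epoch)
            break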
st100853 | thanks a lot for the detailed feedback.
A tutorial on writing custom Datasets + Samplers and using transforms seems to be in order.
I’ll be tracking it on https://github.com/pytorch/tutorials/issues and hope to make progress. |
st100854 | These are some really good points.
For number 3, you do need a new sampler to grab more than the actual number of samples per epoch… here is a quick example of a MultiSampler class which can be passed to a DataLoader to load more than the number of actual samples per epoch:
import torch
from torch.utils.data.sampler import Sampler
class MultiSampler(Sampler):
    """Samples elements more than once in a single pass through the data.
    This allows the number of samples per epoch to be larger than the number
    of samples itself, which can be useful for data augmentation.
    """
    def __init__(self, nb_samples, desired_samples, shuffle=False):
        self.data_samples = nb_samples
        self.desired_samples = desired_samples
        self.shuffle = shuffle
    def gen_sample_array(self):
        n_repeats = self.desired_samples // self.data_samples
        self.sample_idx_array = torch.arange(self.data_samples).repeat(n_repeats).long()
        if self.shuffle:
            self.sample_idx_array = self.sample_idx_array[torch.randperm(len(self.sample_idx_array))]
        return self.sample_idx_array
    def __iter__(self):
        return iter(self.gen_sample_array())
    def __len__(self):
        return self.desired_samples
Hope that helps |
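A minimal usage sketch for the sampler above (a random TensorDataset stands in for real data): ask for three passes over a 100-sample dataset within a single epoch.
import torch
from torch.utils.data import DataLoader, TensorDataset
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 2, (100,)))
sampler = MultiSampler(len(dataset), desired_samples=300, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)   # yields ~300 samples per epoch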
st100855 | Interesting.
For 3. I augmented data at the DataSet level by rolling over the index
import os
import numpy as np
import pandas as pd
from PIL.Image import open as pil_open, FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM, ROTATE_90, ROTATE_180, ROTATE_270
from PIL.Image import Image
from PIL.ImageEnhance import Color, Contrast, Brightness, Sharpness
from sklearn.preprocessing import MultiLabelBinarizer
from torch import from_numpy
from torch.utils.data import Dataset
from PIL import Image
class AugmentedAmazonDataset(Dataset):
"""Dataset wrapping images and target labels for Kaggle - Planet Amazon from Space competition.
This dataset is augmented
Arguments:
A CSV file path
Path to image folder
Extension of images
"""
def __init__(self, csv_path, img_path, img_ext, transform=None):
tmp_df = pd.read_csv(csv_path)
assert tmp_df['image_name'].apply(lambda x: os.path.isfile(img_path + x + img_ext)).all(), \
"Some images referenced in the CSV file were not found"
self.mlb = MultiLabelBinarizer()
self.img_path = img_path
self.img_ext = img_ext
self.transform = transform
self.X_train = tmp_df['image_name']
self.y_train = self.mlb.fit_transform(tmp_df['tags'].str.split()).astype(np.float32)
self.augmentNumber = 14 # TODO, do something about this harcoded value
def __getitem__(self, index):
real_length = self.real_length()
real_index = index % real_length
img = Image.open(self.img_path + self.X_train[real_index] + self.img_ext)
img = img.convert('RGB')
## Augmentation code
if 0 <= index < real_length:
pass
### Mirroring and Rotating
elif real_length <= index < 2 * real_length:
img = img.transpose(FLIP_LEFT_RIGHT)
elif 2 * real_length <= index < 3 * real_length:
img = img.transpose(FLIP_TOP_BOTTOM)
elif 3 * real_length <= index < 4 * real_length:
img = img.transpose(ROTATE_90)
elif 4 * real_length <= index < 5 * real_length:
img = img.transpose(ROTATE_180)
elif 5 * real_length <= index < 6 * real_length:
img = img.transpose(ROTATE_270)
### Color balance
elif 6 * real_length <= index < 7 * real_length:
img = Color(img).enhance(0.95)
elif 7 * real_length <= index < 8 * real_length:
img = Color(img).enhance(1.05)
## Contrast
elif 8 * real_length <= index < 9 * real_length:
img = Contrast(img).enhance(0.95)
elif 9 * real_length <= index < 10 * real_length:
img = Contrast(img).enhance(1.05)
## Brightness
elif 10 * real_length <= index < 11 * real_length:
img = Brightness(img).enhance(0.95)
elif 11 * real_length <= index < 12 * real_length:
img = Brightness(img).enhance(1.05)
## Sharpness
elif 12 * real_length <= index < 13 * real_length:
img = Sharpness(img).enhance(0.95)
elif 13 * real_length <= index < 14 * real_length:
img = Sharpness(img).enhance(1.05)
else:
raise IndexError("Index out of bounds")
if self.transform is not None:
img = self.transform(img)
label = from_numpy(self.y_train[real_index])
return img, label
def __len__(self):
return len(self.X_train.index) * self.augmentNumber
def real_length(self):
return len(self.X_train.index)
For the point 2: train_valid_split I use the following routines:
import random
from math import floor
def augmented_train_valid_split(dataset, test_size = 0.25, shuffle = False, random_seed = 0):
""" Return a list of splitted indices from a DataSet.
Indices can be used with DataLoader to build a train and validation set.
Arguments:
A Dataset
A test_size, as a float between 0 and 1 (percentage split) or as an int (fixed number split)
Shuffling True or False
Random seed
"""
length = dataset.real_length()
indices = list(range(1,length))
if shuffle == True:
random.seed(random_seed)
random.shuffle(indices)
if type(test_size) is float:
split = floor(test_size * length)
elif type(test_size) is int:
split = test_size
else:
raise ValueError('%s should be an int or a float' % str(test_size))
return indices[split:], indices[:split]
So my full loading code is:
# Loading the dataset
# ds_transform = transforms.Compose([transforms.RandomCrop(224),transforms.ToTensor()])
ds_transform = transforms.ToTensor()
X_train = AugmentedAmazonDataset('./data/train.csv','./data/train-jpg/','.jpg',
ds_transform
)
# Creating a validation split
train_idx, valid_idx = augmented_train_valid_split(X_train, 15000)
nb_augment = X_train.augmentNumber
augmented_train_idx = [i * nb_augment + idx for idx in train_idx for i in range(0,nb_augment)]
train_sampler = SubsetRandomSampler(augmented_train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# Both dataloader loads from the same dataset but with different indices
train_loader = DataLoader(X_train,
batch_size=32,
sampler=train_sampler,
num_workers=4,
pin_memory=True)
valid_loader = DataLoader(X_train,
batch_size=64,
sampler=valid_sampler,
num_workers=1,
pin_memory=True) |
st100856 | yeah I see! Very cool… I’d say everything you’re doing can easily be done using custom Transforms
(a transform to load the image from file, one to mirror/reflect/etc, ToTensor to convert from numpy) with the standard TensorDataset, but that definitely works! It would be nice to have this kind of stuff built-in. |
st100857 | Regarding the mirror/reflect, today it can only “replace” the image at the index, not augment, right? It would have to change the length returned by __len__ and the indexing code to work.
Actually maybe custom image transformers could take an argument action with value augment or replace so that if you use augment on a 10000 images dataset you virtually have 20000 images, and replace to get the current behaviour. |
st100858 | That’s not right. The transforms modify the image randomly and not in-place, so that over different epochs you see different versions of your image.
It effectively augments the dataset by a large factor, without having to store them to disk. |
st100859 | I see, it’s more clear now: PyTorch is not doing data augmentation within the same epoch but across multiple epochs.
So indeed I could reimplement all my functions as regular “random transforms” and train on more epochs to achieve similar results.
Then in that case I rework my train/validation routines and use
a train DataSet with transforms.
a validation DataSet with only ToTensor and needed scaling/cropping.
Both pointing to the same CSV source.
Split the indices with a reworked train_valid_split function.
Similar to what I posted use 2 SubsetRandomSamplers and 2 DataLoaders on their respective DataSets. |
st100860 | Hi, thanks for the code sample. I used it as a starting point for my notebook and added these features:
Train/test split, like sklearn’s train_test_split
Visualization of the training and validation sets
Code runs automatically on the GPU if such a device exists, otherwise on the CPU
Now I have several questions:
If this is a multi-label classification problem, why are you returning F.sigmoid(x) in your Net() class and not F.log_softmax(x)?
Shouldn’t you be using MultiLabelMarginLoss or MultiLabelSoftMarginLoss as the loss function instead of F.binary_cross_entropy?
Thanks, |
st100861 | I’ve published my PyTorch code for the Amazon competition on GitHub.
You will find:
Using a weighted loss function
Logging your experiment
Composing data augmentations, also here. Note: use Pillow-SIMD instead of PIL/Pillow. It is even faster than OpenCV
Loading from a CSV that contains image paths - 61 lines, yeah
Equivalent in Keras - 216 lines, ugh. Note: so many lines were needed because by default in Keras you either have the data augmentation with ImageDataGenerator or lazy loading of images with “flow_from_directory” and there is no flow_from_csv
Model finetuning with custom PyCaffe weights
Train_test_split, PyTorch version and Keras version
Weighted sampling training so that the model views rare cases more often
Custom Sampler creation, example for the balanced sampler
Saving snapshots each epoch
Loading the best snapshot for prediction
Failed word embeddings experiments to combine image and text data
Combined weighted loss function (softmax for unique weather tags, BCE for multilabel tags)
Selecting the best F2-threshold via stochastic search at the end of each epoch to maximize the validation score. This is then saved along with the model parameters.
CNN-RNN combination (work in progress) |
st100862 | Yes it’s a multilabel problem
MultiLabelSoftMarginLoss is F.binary_cross_entropy(F.sigmoid(x))
By the way, that was my first Net/project in PyTorch, and I was just using various tutorials available at the time as reference (which were not using MultiLabelSoftMarginLoss for sure).
I’ve come a long way during this project, be sure to check the full code I linked in my previous post. |
st100863 | Thank you for your great code!
Recently I am solving a multilabel classification problem. I have many labels (500), binary and one-hot, but the 0s outnumber the 1s, maybe by about 50 times.
I notice that you used a weighted loss, and it seems that the weight is related to the frequency?
Can you give me more details or ideas?
Thank you, best regards!
(A screenshot was attached to the original post.) |
st100864 | Basically I gave more weights to things that the model saw less, and also the “cloudy” label specifically. |
st100865 | Hi, I want to know, is there any difference between torch.add(A+B) and A+B, where A and B are tensors, or are there some cases where there is a difference between torch.add(A+B) and A+B? |
st100866 | Firstly, it is not torch.add(A+B). It should be torch.add(A,B). As far as I know there is no difference between them. You said in some cases there is a difference; can you state those cases, so that we can check? |
st100867 | Thank you for your reply. I am sorry, the sentence “in some cases there is some difference between torch.add(A+B) and A+B” should have been a question, not an assertion. |
st100868 | There is no difference between them.
check out -
import torch
A = torch.randint(0,10,(1,5))
B = torch.randint(0,10,(1,5))
print(torch.add(A,B))
print(A+B) |
st100869 | I am really confused about whether TensorFlow or PyTorch is better at handling 3D point clouds for classification or semantic segmentation purposes. I have just started using PyTorch and found it is mainly aimed at images or 2D data; I could not find examples for 3D in PyTorch. What is the advantage of using PyTorch for 3D instead of TensorFlow? Can anyone clarify this please? |
st100870 | Hi,
I’m training a network. After plotting the time used per iteration, I found it grows linearly with the number of iterations. There is no accumulation in the architecture. What could be the possible reasons? Thank you for sharing your experience. |
st100871 | Are you training on GPU?
If yes, you would have to call torch.cuda.synchronize() before you start timing and once before each time.time(), since CUDA calls are asynchronous. |
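A minimal timing sketch for a CUDA op (assumes a GPU is available): synchronize before reading the clock on both sides of the measured region.
import time
import torch
x = torch.randn(1024, 1024, device='cuda')
w = torch.randn(1024, 1024, device='cuda')
torch.cuda.synchronize()         # make sure pending kernels are done
start = time.time()
y = x.mm(w)
torch.cuda.synchronize()         # wait for the matmul to actually finish
print('elapsed: %.6f s' % (time.time() - start))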
st100872 | After the first epoch training, I am using del and gc.collect() on DataLoader, model, optimizer, everything except paths, and config, but the RAM is 6GB occupied after the first epoch. Then I am loading the model and creating dev and test DataLoaders to evaluate the model (7GB after), and then I delete test and dev DataLoaders, and RAM falls from 7GB to 6 GB. Then again it is growing during second epoch training and eventually, RAM is full.
I can’t upload the code (nevertheless it’s a quite huge project).
I’m sure no append() is used anywhere in my code (even if, it should be freed after gc.collect()?). I’m using .item() to get loss value.
Where can I look for the problems? |
st100873 | gc.collect() shouldn’t be necessary to avoid running out of memory.
Could you check your data loading functions, if unnecessary data samples are stored somehow?
Are you saving the model parameters somewhere, e.g. to restore the model for the best epoch? |
st100874 | I am saving the model to the disk every epoch. I am using python 2.7 so I’m doing it in the following way.
model_n_b = io.BytesIO()
torch.save(model.state_dict(), model_n_b)
with open(model_name, "wb") as f:
f.write(model_n_b.getvalue())
model_n_b.close()
which I’m not sure is correct, however, without saving (and without later deleting and loading the model object) the leak still occurs.
If I run only data loading, like
for batch in loader:
pass
# loss = model.loss()
...
there’s no leak |
st100875 | Thanks for the info.
Does the memory increase for the full training loop?
If so, I would try to debug where it occurs and maybe you can narrow it down to a specific line, e.g. the criterion or backward pass.
Then maybe you could create a small executable code snippet and we could have a look. |
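A minimal sketch for narrowing down a host-RAM leak (assumes the psutil package is installed; any RSS readout works): print the resident set size after each suspected step and see where it keeps growing.
import os
import psutil
process = psutil.Process(os.getpid())
def log_ram(tag):
    print('%s: %.1f MB' % (tag, process.memory_info().rss / 1024 ** 2))
# inside the training loop, for example:
# log_ram('after forward')
# loss.backward()
# log_ram('after backward')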
st100876 | I am using torch.nn.functional.grid_sample on the GPU; however, I would like to use this operation on the CPU as well. Is there any way to do this? |
st100877 | I need to get a diagonal stripe of the matrix. Say, I have a matrix of size KxN, where K and N are arbitrary sizes and K>N. Given a matrix:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
From it I would need to extract a diagonal stripe, in this case, a matrix MxV size that is created by truncating the original one:
[[ 0 x x]
[ 3 4 x]
[ x 7 8]
[ x x 11]]
So the result matrix is:
[[ 0 4 8]
[ 3 7 11]]
Now, an additional problem that I face is that I have a tensor of, say, size [150, 182, 91], the first part is just the batch size while the matrix I am interested in is the 182x91 one. (sizes here are just examples)
I need to run a function on the 182x91 matrix for each of the 150 dimensions separately.
Well, I have a solution to this actually, but the code makes my whole model at least 20 times slower (credits for the code to layog from StackOverflow). Here is the piece of code that would do it in Pytorch 0.4 (cuda 9.1):
In [1]: import torch
In [2]: def stripe(a):
...: i, j = a.size()
...: assert(i > j)
...: out = torch.zeros((i - j, j))
# this is probably the bottleneck part
...: for diag in range(0, i - j):
...: out[diag] = torch.diag(a, -diag)
...: return out
In [3]: a = torch.randn((182, 91)).cuda()
In [5]: output = stripe(a)
In [6]: output.size()
Out[6]: torch.Size([91, 91])
In [7]: a = torch.randn((150, 182, 91))
# we map the stripe function over the first dimension of the tensor using torch.unbind
# this is a potential bottleneck
In [8]: output = list(map(stripe, torch.unbind(a, 0)))
In [9]: output = torch.stack(output, 0)
In [10]: output.size()
Out[10]: torch.Size([150, 91, 91])
Any potential for optimization here?
For reference, if that helps, here is the same implementation done in numpy:
>>> import numpy as np
>>>
>>> def stripe(a):
... a = np.asanyarray(a)
... i, j = a.shape
... assert i >= j
... k, l = a.strides
... return np.lib.stride_tricks.as_strided(a, (i-j+1, j), (k, k+l))
...
>>> a = np.arange(24).reshape(6, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]])
>>> stripe(a)
array([[ 0, 5, 10, 15],
[ 4, 9, 14, 19],
[ 8, 13, 18, 23]]) |
st100878 | Solved by sytrus-pytorch in post #17 |
st100879 | Hi,
You can use the torch.as_strided() function that has the same definition as the numpy one.
So your stripe implementation should work for torch.tensors as well. |
st100880 | I would use it instead of the for loop in stripe? How do I get the strides before that then?
Thank you! |
st100881 | To get the stride, use the stride function: t.stride().
In your original comment, you said that the numpy implementation you provide is what you want. You can implement this exact function in pytorch by using t.size() (to replace a.shape), t.stride() (to replace a.strides) and torch.as_strided() (to replace np.lib.stride_tricks.as_strided()). Then you can use it the same way you would use th numpy version. |
st100882 | Thank you. I have changed the stripe function and it returns a proper stripe:
def stripe(a):
i, j = a.size()
assert i >= j
k, l = a.stride()
return torch.as_strided(a, (i - j, j), (k, k+1))
a = torch.randn((182, 91)).cuda()
output = stripe(a)
# output.size()
# torch.Size([91, 91])
a = torch.randn((150, 182, 91))
output = list(map(stripe, torch.unbind(a, 0)))
output = torch.stack(output, 0)
# output.size()
# torch.Size([150, 91, 91])
Now I am facing some obscure PyTorch 0.4 error after using that stripe function when the model computes backwards loss:
Traceback (most recent call last):
File "runner.py", line 305, in <module>
main()
File "runner.py", line 249, in main
loss = model.update(batch)
File "J:\PyCharmProjects\tac-self-attention\model\rnn.py", line 67, in update
loss.backward()
File "J:\Anaconda_Python3_6\envs\cuda2\lib\site-packages\torch\tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "J:\Anaconda_Python3_6\envs\cuda2\lib\site-packages\torch\autograd\__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Tensor: invalid storage offset at c:\programdata\miniconda3\conda-bld\pytorch_1524546371102\work\aten\src\thc\generic/THCTensor.c:759
(cuda2)
This doesn’t occur when using the version with .diag(). Any ideas what causes it? |
st100883 | Hi,
I’m afraid the joint use of unbind, your stripe function and stack makes it so that some storage offset becomes negative.
Why not use a batch stripe version instead (that should be much faster as well) ?
def batch_stripe(a):
b, i, j = a.size()
assert i >= j
b_s, k, l = a.stride()
return torch.as_strided(a, (b, i - j, j), (b_s, k, k+1)) |
st100884 | Yes, exactly what I was looking for from the very beginning. Thanks a lot! Using your batch_stripe() function brings back the fast training speed I had before using stack and diag. |
st100885 | I’ve got another question, is it possible instead of getting:
[[ 0 x x]
[ 3 4 x]
[ x 7 8]
[ x x 11]]
To get a reverse stripe like this:
[[ x x 2]
[ x 4 5]
[ 6 7 x]
[ 9 x x]]
Can’t figure out how to properly modify the function above. Thank you! |
st100886 | I guess you could by shifting the beginning of a by linearizing the last two dimensions + narrow. Then set the strides to (k, k-1) for the last two dimensions. |
st100887 | Thank you, could you elaborate more? I don’t understand how that would be done.
I have a potential implementation in numpy (but I guess different to what you suggested):
>>> def reverse_stripe(a):
... a = np.asanyarray(a)
... i, j = a.shape
... assert i >= j
... k, l = a.strides
... return np.lib.stride_tricks.as_strided(a[j:], (i-j, j), (k, l-k))
...
>>> a = np.arange(24).reshape(6, 4)
>>> reverse_stripe(a)
array([[12, 9, 6, 3],
[16, 13, 10, 7],
[20, 17, 14, 11]])
If I take that directly to pytorch with:
def batch_stripe(a):
b, i, j = a.size()
assert i > j
b_s, k, l = a.stride()
# left top to right bottom
# return torch.as_strided(a, (b, i - j, j), (b_s, k, k + 1))
# left bottom to right top
return torch.as_strided(a[j:], (b, i-j, j), (b_s, k, l-k))
I am again getting the error from before:
File "J:\Anaconda_Python3_6\envs\cuda2\lib\site-packages\torch\autograd\__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Tensor: invalid storage offset at c:\programdata\miniconda3\conda-bld\pytorch_1524546371102\work\aten\src\thc\generic/THCTensor.c:759
I guess some parameter is not tracked properly again, I guess because I am modifying a? |
st100888 | l-k is going to be negative here, no? PyTorch does not allow for negative strides. |
st100889 | It seems to run just fine, I get the output I need using l-k, the error occurs later when the loss is computed. I don’t think it’s negative, since the matrix is flipped using a[j:] (not sure about this part). |
st100890 | No solution found yet; does anyone have any suggestion on how to get the reverse stripe, or how to fix the “unreachable” error in my solution? Thank you! |
st100891 | Yea, when I run that as pure numpy I get an error that there are negative strides involved and they are not supported:
def batch_stripe_numpy(a):
# this doesn't work in the current PyTorch version
# since negative strides are not supported
a = a.cpu().detach().numpy()
b, i, j = a.shape
assert i >= j
b_s, k, l = a.strides
strided_result = np.lib.stride_tricks.as_strided(a[j - 1:], (b, i - j, j), (b_s, k, l - k))
return torch.from_numpy(strided_result).type(torch.FloatTensor).to("cuda")
Can you explain what you mean here:
I guess you could by shifting the beginning of a by linearizing the last two dimensions + narrow. Then set the strides to (k, k-1) for the last two dimensions.
I don’t understand your suggestion, maybe a code snippet of what you suggest doing would help? thank you! |
st100892 | Another try to just do it in numpy:
def reverse_stripe(a):
a = a.cpu().detach().numpy()
*sh, i, j = a.shape
assert i > j
*st, k, m = a.strides
strided_result = np.lib.stride_tricks.as_strided(a[..., j - 1:, :], (*sh, i - j + 1, j), (*st, k, m - k))
return torch.from_numpy(strided_result).type(torch.FloatTensor).to("cuda")
Here is the error message:
ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future releases. |
st100893 | I see that there is a way to flip the matrix, which would allow me to use the original implementation. The flip function is defined here: https://github.com/pytorch/pytorch/issues/229
def flip(x, dim):
dim = x.dim() + dim if dim < 0 else dim
indices = [slice(None)] * x.dim()
indices[dim] = torch.arange(x.size(dim) - 1, -1, -1,
dtype=torch.long, device=x.device)
return x[tuple(indices)]
So I can do:
def batch_stripe(a):
b, i, j = a.size()
assert i >= j
b_s, k, l = a.stride()
return torch.as_strided(a, (b, i - j, j), (b_s, k, k+1))
# this simulates numpy stripe(a[..., ::-1, :])[..., ::-1, :] ???
output = flip(batch_stripe(flip(a, 3)), 3)
but when I set .dim to 3 I get this:
RuntimeError: dimension out of range (expected to be in range of [-3, 2], but got 3)
Is dim actually referring to tensor rank in this case in PyTorch? dim=2 seems to work, but I am not sure if that is correct. |
st100894 | No sure why you would like to set the argument dim=3 when the tensor a is of dimension 3 thus only has dimensions 0,1,2. Maybe this is what you want?
import torch
def flip(x, dim):
indices = [slice(None)] * x.dim()
indices[dim] = torch.arange(x.size(dim) - 1, -1, -1,
dtype=torch.long, device=x.device)
return x[tuple(indices)]
def batch_stripe(a):
b, i, j = a.size()
assert i >= j
b_s, k, l = a.stride()
return torch.as_strided(a, (b, i-j+1, j), (b_s, k, k+l))
if __name__ == '__main__':
a = torch.arange(24).view(2,4,3)
print('original tensor:')
print(a)
print('inverse stripe:')
print(batch_stripe(flip(a, -1))) |
st100895 | Have to reopen, since I have some issues with this on PyTorch 0.4.1 and Cuda 9.2 / CuDNN 7.1.4. Previously I ran 0.4.0 with Cuda 9.0 and CuDNN 7.0.5.
Basically, flip is already supported in 0.4.1 so the code looks like the following:
import torch
def batch_stripe(a):
b, i, j = a.size()
assert i >= j
b_s, k, l = a.stride()
return torch.as_strided(a, (b, i-j, j), (b_s, k, k+l))
if __name__ == '__main__':
a = torch.arange(24).view(2,4,3)
print('original tensor:')
print(a)
print('inverse stripe:')
flipped = torch.flip(a, [2])
print(batch_stripe(flipped))
Unfortunately, I am getting the following error in my code:
RuntimeError: cuda runtime error (59) device-side assert triggered at /opt/.../THCTensorCopy.cpp:21
And it is pointing to the line that does the flip, when I use CUDA_LAUNCH_BLOCKING=0. If I use CUDA_LAUNCH_BLOCKING=1, then it points to some unrelated part in the code. I see that this only happens with CuDNN 7.1 and not CuDNN 7.0.
This works fine with the small code example above, but fails in my project. Any ideas? Maybe there is a better way of doing the batch_stripe in PyTorch 0.4.1?
I will try to provide an example that can reproduce the error, but the codebase is too big for that right now. |
st100896 | Got it working, it was some mismatch in dimensions after all. Unfortunately, CUDA error pointed to something unrelated more or less. |
st100897 | I use DataLoader for my segmentation task. I put two images in a batch; one image in a batch may be cropped to create a different number of patches. So, when the code runs the default_collate function in the DataLoader, I get an error at torch.stack(batch, 0, out=out) -> RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0.
if _use_shared_memory:
# If we're in a background process, concatenate directly into a
# shared memory tensor to avoid an extra copy
numel = sum([x.numel() for x in batch])
storage = batch[0].storage()._new_shared(numel)
out = batch[0].new(storage)
return torch.stack(batch, 0, out=out)
By debugging, I found that different items in the batch have different shapes, for example batch[0].shape = (2, 3, 512, 512) and batch[1].shape = (4, 3, 512, 512), as the images in the batch create 2 patches and 4 patches respectively, so directly stacking them in dim 0 causes that error.
I think I need to add a new axis at dim 0, giving (1, 2, 3, 512, 512) and (1, 4, 3, 512, 512) via unsqueeze, and then stack and squeeze. But that means I need to rewrite the default_collate function.
Is there any simpler way? |
st100898 | See the collate_fn arg of https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader |
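A minimal collate_fn sketch for this case (it assumes the targets also have a per-patch first dimension): concatenate the variable-size patch stacks along dim 0 instead of stacking them.
import torch
from torch.utils.data import DataLoader
def patch_collate(batch):
    # batch is a list of (patches, target) pairs, patches shaped (n_i, 3, 512, 512)
    patches = torch.cat([item[0] for item in batch], dim=0)
    targets = torch.cat([item[1] for item in batch], dim=0)
    return patches, targets
# loader = DataLoader(dataset, batch_size=2, collate_fn=patch_collate)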
st100899 | I want to get the padding of a conv2d module in module.register_forward_hook.
However, I see both padding and output_padding in dir(conv2d_module); what is the difference between them? Which one should I use?
dir(module) # conv2d
['__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__format__', '__getattr__',
'__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__r
epr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_all
_buffers', '_apply', '_backend', '_backward_hooks', '_buffers', '_forward_hooks', '_forward_pre_hooks',
'_get_name', '_load_from_state_dict', '_modules', '_parameters', '_slow_forward', '_tracing_name', '_ver
sion', 'add_module', 'apply', 'bias', 'children', 'cpu', 'cuda', 'dilation', 'double', 'dump_patches', '
eval', 'extra_repr', 'float', 'forward', 'groups', 'half', 'in_channels', 'kernel_size', 'load_state_dic
t', 'modules', 'named_children', 'named_modules', 'named_parameters', 'out_channels', 'output_padding',
'padding', 'parameters', 'register_backward_hook', 'register_buffer', 'register_forward_hook', 'register
_forward_pre_hook', 'register_parameter', 'reset_parameters', 'share_memory', 'state_dict', 'stride', 't
o', 'train', 'training', 'transposed', 'type', 'weight', 'zero_grad'] |