st100500 | Then when I import it, this warning is printed before the error:
/home/dex/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py:118: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 4.9 and above.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.
See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
for instructions on how to install GCC 4.9 or higher.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
But following that gist and trying to install 4.9 and 5.9, I get the same errors.
I can't get it installed; I also tried on Ubuntu 18.04, but with a different error:
dex@dexpc:~$ sudo apt-get install software-properties-common
[sudo] password for dex:
Reading package lists... Done
Building dependency tree
Reading state information... Done
software-properties-common is already the newest version (0.96.24.32.4).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
dex@dexpc:~$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
WARNING:root:could not open file '/etc/apt/sources.list'
WARNING:root:could not open file '/etc/apt/sources.list'
Toolchain test builds; see https://wiki.ubuntu.com/ToolChain
More info: https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test
Press [ENTER] to continue or Ctrl-c to cancel adding it.
WARNING:root:could not open file '/etc/apt/sources.list'
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 http://dl.google.com/linux/chrome/deb stable Release
Hit:3 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease
0% [Release.gpg gpgv 1 189 B] [Connected to download.docker.com (52.222.250.2
Hit:5 http://ppa.launchpad.net/teejee2008/ppa/ubuntu bionic InRelease
Hit:6 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu bionic InRelease
Reading package lists... Done
dex@dexpc:~$
dex@dexpc:~$ sudo apt-get update
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 http://dl.google.com/linux/chrome/deb stable Release
Hit:3 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease
Hit:4 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:6 http://ppa.launchpad.net/teejee2008/ppa/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu bionic InRelease
Reading package lists... Done
dex@dexpc:~$ sudo apt-get install gcc-5.9 g++-5.9
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package gcc-5.9
E: Couldn't find any package by glob 'gcc-5.9'
E: Couldn't find any package by regex 'gcc-5.9'
E: Unable to locate package g++-5.9
E: Couldn't find any package by glob 'g++-5.9'
E: Couldn't find any package by regex 'g++-5.9'
dex@dexpc:~$ gcc --version
gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0 |
st100501 | Suppose I have a 2D tensor A and a 1-D tensor C recording the length of each chunk.
For example, C[0]=4 means I want A[0:4]; C[1]=5 means I want A[4:9], and so on.
It is easy to manually iterate over the C to find all these chunks.
However, this is very slow.
I am wondering if there is an efficient operation that can get these chunks in a faster way?
Thanks for the help ! |
st100502 | You could use split for that:
x = torch.arange(20)
lengths = [4, 5, 11]
x.split(lengths, 0) |
st100503 | Thanks !
This is what I need !
I want to ask a related question.
What if I need to torch.sum() for each chunk, is there any way to avoid the iteration ? |
st100504 | Not that I’m aware of.
However, you could manually split the tensor and sum the chunks:
start_idx = 0
for l in lengths:
    print(x.narrow(0, start_idx, l).sum())
    start_idx += l
I haven’t timed it, but depending on the tensor shape, you might be better off using split and then summing each chunk in a loop. |
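A minimal sketch of that suggestion, reusing the x and lengths from the example above; the Python loop then runs once per chunk rather than once per element:
x = torch.arange(20)
lengths = [4, 5, 11]
chunk_sums = torch.stack([chunk.sum() for chunk in x.split(lengths, 0)])  # one scalar per chunk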
st100505 | Thanks !
I have tried both the manual approach and split; the manual one is much slower than split plus sum.
It seems we can not avoid the iteration. |
st100506 | Hi, is there something similar to ConstantPad2d but for the inner border, in a way that the image is still bordered with a constant value but its shape stays the same?
I’m asking because I have a batch of n images and I want to color 2 of them in white and the rest in black. So I used ConstantPad2d to give all images a black outer border. Now I want to select the two images whose border color I’ll replace with white. I found a solution by looping over indexes or using numpy/matlab slicing conventions, but I was wondering if there’s a more “torch” way to do it. |
st100507 | Solved by SimonW in post #4
I meant just doing slicing yourself, e.g., F.pad(x[:, :, 2:-2, 2:-2], 2, 'constant', -1) |
st100508 | What do you mean by crop? I’m using ConstantPad2d to pad my tensors, and CenterCrop is for PIL images, not tensors |
st100509 | I meant just doing slicing yourself, e.g., F.pad(x[:, :, 2:-2, 2:-2], 2, 'constant', -1) |
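For example (a sketch with made-up indices and a 2-pixel border), you could select the two images and re-pad just those with a white value; note F.pad takes the padding as a tuple here:
x = torch.zeros(8, 3, 32, 32)            # batch that already has a black border
white_idx = torch.tensor([1, 5])          # hypothetical indices of the two images
x[white_idx] = F.pad(x[white_idx, :, 2:-2, 2:-2], (2, 2, 2, 2), 'constant', 1.0)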
st100510 | I want to build an LSTM model which takes a state S0 as the input and then the output is a sequence of S1, S2, … Sn. The length of the output sequence is variable. Three more points:
1- Each state in the sequence depends on the previous one. So, let’s say S1 leads to S2, then S2 leads to S3, and at some point there should be the possibility to decide to stop, for example at the Sn state.
Or alternatively, the output sequence can be set to a fixed value and after some point, the state should not change (like Sn leads to Sn again). Which one is easier to implement and how?
2- Ideally, we should be able to start from S2 and get S3 and so on. I guess this behavior is similar to return_sequences =True flag in Keras. Should I train the network on all possible subsequences? or is there a way to learns this only from the main sequence?
3- Each state has a vector with a dimension of 100. The first 20 dimensions (let’s call it ID) are fixed through a sequence (IDs are different from each other, however they should stay unchanged during the sequence). How is it possible to keep this fixed within the LSTM? |
st100511 | PyTorch should make it really easy for you to implement either alternative, but the first technique seems easier. You can simply have a termination condition, and keep generating newer outputs in a sequence until that termination condition is met. You could then compute the loss and backprop.
This really depends on the kind of data you’re using. To me it seems like you could sample subsequences of varying lengths and train on those. (This isn’t really a PyTorch question.)
LSTM hidden states change due to updates from whatever gradient-based learning scheme you end up using. I see no reason why these first 20 dimensions should be part of a hidden state if these are not learnable. It’s not possible to keep these fixed in standard RNN/LSTM implementations. |
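A rough sketch of the first suggestion (generate until a learned stop signal fires); the module sizes, the 0.5 threshold and the safety cap are made up for illustration, and S0 is assumed to be a 1-D tensor of size 100:
cell = nn.LSTMCell(100, 128)           # state_dim=100, hidden_dim=128
to_state = nn.Linear(128, 100)         # maps hidden state to the next S_t
to_stop = nn.Linear(128, 1)            # scalar "stop" score

s = S0.view(1, -1)                     # given initial state
h = torch.zeros(1, 128)
c = torch.zeros(1, 128)
outputs = []
for _ in range(50):                    # safety cap on sequence length
    h, c = cell(s, (h, c))
    s = to_state(h)                    # S1, S2, ...
    outputs.append(s)
    if torch.sigmoid(to_stop(h)).item() > 0.5:   # termination condition
        break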
st100512 | @krishnamurthy, thanks for the reply.
1- Do you know any sample code that showed how we can incorporate (stop) condition inside the LSTM?
2- Training on subsequences is possible. However, I wanted to automatically use the previous output as the input of the next hidden unit.
3- Well, the first 20 dimensions are learnable. In the sequence S = [S0, S1, … , Sn] the first 20 dimensions are fixed, but they are different from the one of sequence S’ = [S’0, S’1, …, S’m]. |
st100513 | I have a seq2seq model, which I want to train in an adversarial setting. At each timestep, I want to do the following:
Obtain a probability distribution over the words of the vocabulary, from the decoder logits.
Sample a word from the aforementioned distribution.
Feed the sampled word, back to the model (Scheduled Sampling)
Feed the sequence of words produced by the decoder to a discriminator.
I need my model to be differentiable end-to-end. In order to “soft-sample” words, I use as word representations, a weighted sum of all the word embeddings, parameterized by the softmax of the logits of decoder from each timestep (like in Goyal - 2017).
Ideally, I want to sample discrete words, but backpropagate as if I had sampled from the softmax of the logits, in order to make my model differentiable. I picked this trick from here: https://github.com/pytorch/pytorch/blob/425ea6b31e433eaf1e4aa874332c1d6176cc6a8a/torch/nn/functional.py#L1025
And this is my implementation of what I described:
def softmax_discrete(self, logits, tau=1):
    y_soft = F.softmax(logits.squeeze() / tau, dim=1)
    shape = logits.size()
    _, k = y_soft.max(-1)
    y_hard = logits.new_zeros(*shape).scatter_(-1, k.view(-1, 1), 1.0)
    y = y_hard - y_soft.detach() + y_soft
    return y

def _current_emb(self, logits, tau):
    dist = self.softmax_discrete(logits.squeeze(), tau)
    e_i = dist.mm(self.embedding.weight)
    return e_i.unsqueeze(1)
Is this implementation correct? Since I am computing a weighted sum of the word embeddings using a discrete distribution, in which most embeddings are multiplied by zero, will the gradients be non-zero (as if the distribution were not discrete), or not?
My model is not performing very well and I would like to rule out this part as one of the reasons.
Thanks! |
st100514 | import rpy2.robjects as robj
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
robj.r["source"]("input_matrix.R")
matrix = robj.r['input_matrix']
HIDDEN_LAYER = 5
LEARNING_RATE = 0.01
data_train = torch.tensor(matrix)
data_train = data_train.view(10,891)
print(data_train.size())
INP_DIM = data_train.size()[0] - 1
class Titanic(nn.Module):
    def __init__(self, d_in, h):
        super(Titanic, self).__init__()
        self.layer1 = nn.Linear(d_in, h)
        self.layer2 = nn.Linear(h, h)
        self.layer3 = nn.Linear(h, h)
        self.layer4 = nn.Linear(h, d_in)
        assert(d_in == data_train.size()[0] - 1)
        assert(h == HIDDEN_LAYER)

    def forward(self, *input):
        input = F.relu(self.layer1(input))
        input = F.relu(self.layer2(input))
        input = F.relu(self.layer3(input))
        input = self.layer4(input)
        return F.log_softmax(input)
net = Titanic(INP_DIM,HIDDEN_LAYER)
print(net)
optimizer = optim.SGD(net.parameters(), lr = LEARNING_RATE)
crit = nn.KLDivLoss()
print(INP_DIM)
for epoch in range(0, data_train.size()[1]):
    vec = data_train[:, epoch]
    print(vec)
    target = vec[0]
    data = vec[1:]
    print(data)
    target, data = Variable(target), Variable(data)
    optimizer.zero_grad()
    net_out = net(data)  # error on this line |
st100515 | The error comes from the * in your forward method.
Just remove it, or use input = F.relu(self.layer1(input[0])).
May I ask for the reason you are passing your input as *input? |
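For completeness, one possible corrected signature (a sketch of the first option; the explicit dim is only added to avoid the implicit-dimension warning):
def forward(self, input):
    input = F.relu(self.layer1(input))
    input = F.relu(self.layer2(input))
    input = F.relu(self.layer3(input))
    input = self.layer4(input)
    return F.log_softmax(input, dim=-1)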
st100516 | Thanks that worked.
I was a bit confused on how the input data is to be passed since i am new to pytorch |
st100517 | Hi,
I’m trying to train a simple model with the cats and dogs data set. When I start training on the CPU, the loss decreases the way it should, but when I switched to GPU mode the loss is always zero. I moved the model and tensors to the GPU as in the code below, but the loss is still zero. Any idea?
import os
import os.path
import csv
import glob
import numpy as np
# import matplotlib.pyplot as plt
# from PIL import Image
#from sklearn.metrics import confusion_matrix
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
#some initial setup
np.set_printoptions(precision=2)
use_gpu = torch.cuda.is_available()
np.random.seed(1234)
#print(use_gpu)
DATA_DIR = "/scratch/amirzaei/pytorch/catvsdog/train/"
DATA_TST_DIR = "/scratch/amirzaei/pytorch/catvsdog/test/"
sz = 224
batch_size = 16
trn_dir = f'{DATA_DIR}'
tst_dir = f'{DATA_DIR}'
tfms = transforms.Compose([
    transforms.Resize((sz, sz)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
train_ds = datasets.ImageFolder(trn_dir, transform=tfms)
valid_ds = datasets.ImageFolder(tst_dir, transform=tfms)
test_ds = datasets.ImageFolder(tst_dir, transform=tfms)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size = batch_size, shuffle=True, num_workers=8)
valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size = batch_size, shuffle=True, num_workers=8)
test_dl = torch.utils.data.DataLoader(test_ds, batch_size = 1, shuffle=False, num_workers=1)
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.fc = nn.Linear(56*56*32, 2)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
model = SimpleCNN()
if use_gpu:
    print('yes gpu')
    torch.set_default_tensor_type('torch.cuda.FloatTensor')
    model = model.cuda()
criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.SGD(model.parameters(), lr=0.002, momentum=0.9)
num_epochs = 10
losses = []
for epoch in range(num_epochs):
    for i, (inputs, targets) in enumerate(train_dl):
        model.train()
        optimizer.zero_grad()
        outputs = model(Variable(inputs.cuda()))
        loss = criterion(outputs.cuda(), Variable(targets.cuda()))
        losses += [loss.item()]
        loss.backward()
        optimizer.step()
        # report
        if (i+1) % 50 == 0:
            print('epoch [%d/%d], step [%d/%d], loss %f' % (epoch, num_epochs, i, len(train_ds) // batch_size, float(loss.item())))
torch.save(model.state_dict(), '/scratch/amirzaei/pytorch/catvsdog/train/SAVED_MODEL.pth')
This is the beginning of the output:
yes gpu
epoch [0/10], step [49/1562], loss 0.000000
epoch [0/10], step [99/1562], loss 0.000000
epoch [0/10], step [149/1562], loss 0.000000
epoch [0/10], step [199/1562], loss 0.000000
epoch [0/10], step [249/1562], loss 0.000000
epoch [0/10], step [299/1562], loss 0.000000
epoch [0/10], step [349/1562], loss 0.000000
epoch [0/10], step [399/1562], loss 0.000000
... |
st100518 | Solved by ptrblck in post #15
If you use datasets.ImageFolder then yes, your images should be located in separate folders which represent the classes.
If you don’t want that, you can easily write your own Dataset and load the images using your own logic.
Here is a good tutorial explaining, how to use Dataset.
One way would be… |
st100519 | Your output should already be on the GPU.
Could you remove the .cuda() call on your output:
criterion(outputs.cuda(), ...)
Also, Variables are deprecated. You don’t have to wrap your tensors in Variables anymore. |
st100520 | Yeah, I’ve checked it with a dummy example and it should work.
Could you print the output and target before calling the criterion? |
st100521 | Sorry, I cannot reply real time, because each time I run the code on University’s server and my tasks goes on queue, so I should wait for other to finish first :). |
st100522 | Sure, no problem.
I’ll take another look at the code.
The same code is returning a valid loss using the CPU or did you change something else? |
st100523 | yes gpu
outputs:
tensor([[-0.7218, 0.0799],
[-0.3777, 0.0066],
[-0.0300, 0.1042],
[-0.0631, 0.0776],
[-0.2742, 0.1017],
[-0.1016, 0.3217],
[-0.4512, 0.1652],
[-0.2501, -0.0158],
[-0.1001, 0.0228],
[-0.1450, -0.1840],
[-0.5124, -0.0129],
[-0.3069, 0.0862],
[-0.4056, 0.0122],
[-0.0393, 0.1312],
[-0.1726, -0.0376],
[-0.1504, 0.3080]], grad_fn=<ThAddmmBackward>)
-------------------------
-------------------------
targets
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cpu')
outputs:
tensor([[ -7.3676, 7.1940],
[ -7.8105, 7.8614],
[ -8.4740, 8.1551],
[ -6.6943, 6.3968],
[ -6.9419, 6.5977],
[ -7.8175, 7.5915],
[ -7.2899, 7.3079],
[ -9.4329, 8.9957],
[ -9.8488, 9.2562],
[ -6.5058, 6.3480],
[-10.5865, 10.7625],
[ -6.3048, 6.3257],
[ -7.0993, 6.6395],
[ -5.7157, 5.3780],
[ -4.9567, 4.7297],
[ -7.7228, 7.1255]], grad_fn=<ThAddmmBackward>)
-------------------------
-------------------------
targets
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cpu')
outputs:
tensor([[-11.3238, 11.1269],
[-14.0514, 14.0194],
[ -9.9086, 9.7313],
[-13.4018, 13.1804],
[-15.4153, 15.5052],
[-16.0751, 15.5695],
[-13.1213, 12.6561],
[-21.4735, 21.5242],
[-10.5720, 10.1248],
[-12.9845, 12.7423],
[-16.3938, 15.6749],
[-10.4105, 10.3380],
[-15.3936, 15.1340],
[-16.4181, 16.6434],
[-15.4390, 15.3033],
[-14.1930, 13.9824]], grad_fn=<ThAddmmBackward>)
-------------------------
-------------------------
targets
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cpu')
outputs:
tensor([[-19.4064, 19.3802],
[-12.5685, 12.5996],
[-21.5608, 21.6957],
[-16.2454, 16.4091],
[-23.8241, 23.5606],
[-24.6136, 24.1047],
[-19.3370, 19.2353],
[-16.1822, 16.2280],
[-22.7567, 22.6444],
[-18.3765, 18.0064],
[-20.7338, 20.7988],
[-22.2366, 21.6640],
[-15.4236, 14.9537],
[-21.1272, 21.0056],
[-16.7133, 16.6841],
[-19.2318, 19.0395]], grad_fn=<ThAddmmBackward>)
-------------------------
-------------------------
targets
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cpu')
outputs:
tensor([[-25.9788, 25.7405],
[-32.9146, 32.5946],
[-17.9632, 17.9339],
[-20.4449, 20.5898],
[-24.3951, 24.6153],
[-27.7594, 27.4260],
[-20.1770, 19.8166],
[-21.0899, 20.4372],
[-29.1212, 28.3369],
[-24.3874, 24.0906],
[-24.2173, 24.1907],
[-38.5717, 38.0031],
[-32.8707, 32.0967],
[-19.9623, 19.7503],
[-16.4334, 16.2315],
[-22.4122, 22.3104]], grad_fn=<ThAddmmBackward>)
-------------------------
-------------------------
targets
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cpu') |
st100524 | The first output tensor returns a valid loss, but as you can see the following ones have really high logits for the target class (1), which returns a loss of approx. 0.
If you use F.log_softmax or F.softmax on the outputs, you see that your model outputs a probability of ~1 for class1. |
st100525 | So you are saying the model is working properly? I checked for 10 epochs and for all epochs the loss is zero; it seems incorrect, doesn’t it? I have shuffle = True, so the data set should fetch a mix of cats and dogs images, right? |
st100526 | It’s hard to tell as I don’t know your complete use case.
From the output and target you’ve posted, the loss is already 0 in the 3rd iteration.
The targets look strange, as you have all ones in each iteration.
Your model might just learn to output class1, if it’s the majority class.
Could you validate your Dataset to see, how many instances there are of class0 and class1?
Currently you are also using the same directory for train, val and test, which should be unrelated to the current problem. |
st100527 | Yes, you were right, I did something terrible in the dataset. It will take some time to correct it; I will post the result again. I have another problem about the testing phase, may I ask it here? |
st100528 | Sure! If it’s a separate problem, which might need some discussion, it would probably be better to start a new thread and focus here on the original issue. |
st100529 | When we are using datasets in PyTorch, is it necessary to have a separate folder for each class? In my testing phase, I have one folder where all cat and dog pictures are together, unlike the training and dev folders where they were separated. So when I run the test phase it gives me this error: RuntimeError: Found 0 files in subfolders of: /scratch/amirzaei/pytorch/catvsdog/test, which apparently means it looks for separate folders for cats and dogs. This is my test module:
import os
import os.path
import csv
import glob
import numpy as np
# import matplotlib.pyplot as plt
# from PIL import Image
#from sklearn.metrics import confusion_matrix
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
#some initial setup
np.set_printoptions(precision=2)
use_gpu = torch.cuda.is_available()
np.random.seed(1234)
#print(use_gpu)
DATA_DIR = "/scratch/amirzaei/pytorch/catvsdog/train/"
DATA_TST_DIR = "/scratch/amirzaei/pytorch/catvsdog/test"
sz = 224
batch_size = 16
trn_dir = f'{DATA_DIR}'
tst_dir = f'{DATA_TST_DIR}'
tfms = transforms.Compose([
    transforms.Resize((sz, sz)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
test_ds = datasets.ImageFolder(tst_dir, transform=tfms)
test_dl = torch.utils.data.DataLoader(test_ds, batch_size = 1, shuffle=False, num_workers=1, pin_memory=False)
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.fc = nn.Linear(56*56*32, 2)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
def test(model, test_loader):
    model.cuda()
    model.eval()
    csv_map = {}
    # switch to evaluate mode
    filepath = 1
    model.eval()
    for i, (images, _) in enumerate(test_loader):
        # pop extension, treat as id to map
        image_var = torch.autograd.Variable(images.cuda(), volatile=True)
        y_pred = model(image_var.cuda())
        # get the index of the max log-probability
        smax = nn.Softmax()
        smax_out = smax(y_pred)[0]
        cat_prob = smax_out.data[0]
        dog_prob = smax_out.data[1]
        prob = dog_prob
        if cat_prob > dog_prob:
            prob = 1 - cat_prob
        prob = np.around(prob, decimals=4)
        prob = np.clip(prob, .0001, .999)
        csv_map[filepath] = float(prob.data[0])
        filepath += 1
        # print("{},{}".format(filepath, prob[0]))
    with open('{}entry.csv'.format(DATA_TST_DIR), 'w') as csvfile:
        fieldnames = ['id', 'label']
        csv_w = csv.writer(csvfile)
        csv_w.writerow(('id', 'label'))
        for row in sorted(csv_map.items()):
            csv_w.writerow(row)
    return
model = SimpleCNN()
model.load_state_dict(torch.load('/scratch/amirzaei/pytorch/catvsdog/SAVED_MODEL.pth'))
test(model, test_dl) |
st100530 | If you use datasets.ImageFolder then yes, your images should be located in separate folders which represent the classes.
If you don’t want that, you can easily write your own Dataset and load the images using your own logic.
Here is a good tutorial explaining how to use Dataset.
One way would be to get all image paths with their label, e.g. by using their name, split them into train and val, and pass them to your Dataset. |
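A rough sketch of such a Dataset for a flat, unlabelled test folder (the *.jpg pattern and the returned file id are assumptions):
import glob, os
from PIL import Image
from torch.utils.data import Dataset

class FlatTestDataset(Dataset):
    def __init__(self, root, transform=None):
        self.paths = sorted(glob.glob(os.path.join(root, '*.jpg')))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        img_id = os.path.splitext(os.path.basename(self.paths[idx]))[0]  # e.g. "1" from "1.jpg"
        return img, img_id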
st100531 | ptrblck:
ImageFolder
the problem is, the file names in the test directory are numbers. I got this dataset from Kaggle, and it is not easy to separate the test data set into two categories. I think the first solution you offered is more suitable in this case |
st100532 | I want to compute the vertical/horizontal gradient of a matrix/tensor in PyTorch, just like the numpy function np.gradient (https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.gradient.html).
Say, the matrix is $G$, $\Delta_w G$ and $\Delta_h G$ are needed. Does any built-in function in PyTorch work like these? Or how can I implement it?
Thanks a lot. |
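One way to get finite differences with plain slicing (a sketch; np.gradient uses central differences in the interior, this only shows simple forward differences):
G = torch.randn(5, 7)                  # example matrix
dG_h = G[1:, :] - G[:-1, :]            # vertical / row direction, shape (4, 7)
dG_w = G[:, 1:] - G[:, :-1]            # horizontal / column direction, shape (5, 6)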
st100533 | Hi !
I am trying to make my custom C++ extension for pytorch.
Intro:
C++ file look like this:
#include <torch/torch.h>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/cudaimgproc.hpp>
//CODE ...
cv::pyrDown(in, out, cv::Size(X, Y));
//CODE ...
setup.py look like this:
from setuptools import setup
from torch.utils.cpp_extension import CppExtension, BuildExtension
setup(name='lltm',
ext_modules=[
CppExtension(
'gaussianpyramid',
['src/pyramid.cpp'],
extra_compile_args=['-lopencv_calib3d', '-lopencv_contrib', '-lopencv_core', '-lopencv_features2d',
'-lopencv_flann', '-lopencv_highgui', '-lopencv_imgproc', '-lopencv_legacy',
'-lopencv_ml', '-lopencv_objdetect', '-lopencv_photo', '-lopencv_stitching',
'-lopencv_superres', '-lopencv_ts', '-lopencv_video', '-lopencv_videostab'],
)],
cmdclass={'build_ext': BuildExtension},
)
The problem:
After compilation, if in a python script I have:
import MY_LIB
It stops with an error: undefined symbol: _ZN2cv7pyrDownERKNS_11_InputArrayERKNS_12_OutputArrayERKNS_5Size_IiEEi
Info:
OpenCV library are in my LD_LIBRARY_PATH
ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH) | grep opencv find them
This symbol is inside the opencv library (I have check this with a grep on the library)
Another issue (and I think the problem comes from there) is that if I do ldd my_lib.so (generated by the setup.py), I don’t see the opencv libs, only:
linux-vdso.so.1 => (0x00007ffc46dc4000)
libstdc++.so.6 => XXX (0x00007fbda60e6000)
libm.so.6 => XXX (0x00007fbda5db3000)
libgcc_s.so.1 => XXX (0x00007fbda5ba1000)
libpthread.so.0 => XXX (0x00007fbda5985000)
libc.so.6 => XXX (0x00007fbda55c1000)
/lib64/ld-linux-x86-64.so.2 (0x00005573b895d000)
And no opencv library… |
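One detail that stands out (an observation, not a confirmed fix): the -lopencv_* flags are passed as extra_compile_args, which only affect the compile step, so nothing actually links against OpenCV — which would explain both the undefined symbol and the missing entries in ldd. Linker inputs usually go into libraries / library_dirs (or extra_link_args) instead, roughly like this sketch (library names and paths are placeholders):
CppExtension(
    'gaussianpyramid',
    ['src/pyramid.cpp'],
    libraries=['opencv_core', 'opencv_imgproc', 'opencv_highgui'],
    library_dirs=['/usr/local/lib'],
)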
st100534 | I see in most caffe prototxts that, for conv layers, the lr for the convolution kernel and the lr for the bias are different (lr for bias is usually 2x the lr for the kernel). Does it make sense to assign lr like this? If so, how could I do that in PyTorch? I know I could use an optimizer with different parameter lists; however, that is not so convenient if there are a lot of conv layers, since I would need to construct a tedious optimizer parameter list. Could I assign an lr amplifying ratio when defining the model structure like in caffe? |
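One way to avoid listing every layer by hand (a sketch, not an official answer from this thread, assuming model is your network) is to collect weights and biases via named_parameters() and build two parameter groups:
base_lr = 0.01
weights = [p for n, p in model.named_parameters() if n.endswith('weight')]
biases  = [p for n, p in model.named_parameters() if n.endswith('bias')]
optimizer = optim.SGD([
    {'params': weights, 'lr': base_lr},
    {'params': biases,  'lr': base_lr * 2},   # caffe-style 2x lr for biases
], lr=base_lr, momentum=0.9)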
st100535 | So when I ran my code around 2 days ago it was taking 1 min per epoch, and when I ran exactly the same code today it was taking around 10 mins per epoch. In fact, it started very fast and after about 30-50% of the epoch is done, it suddenly slows down.
I noticed that my disk space (4tb hdd) had only about 70 gb left. So I cleared out around 150gb of data. After this, the training time again took around 1 min epoch.
Any idea why disk space is so much related to the training time? |
st100536 | Solved by ptrblck in post #2
Based on these claims it’s apparently a valid assumption that your HDD/SSD slows down once it’s almost filled. |
st100537 | Based on these claims it’s apparently a valid assumption that your HDD/SSD slows down once it’s almost filled. |
st100538 | General question. I ran an experiment to test the number of bytes used in sparse matrices versus nn.Linear layers of the same size. I set both nn.Linear layer and sparse matrix to all zeros and found that nn.Linear is twice as large as a sparse matrix. Why is that?
1 x 1,000,000,000 matrix
nn.Linear 8.0 Gb
sparse.FloatTensor 4.0 Gb
You can find my code here |
st100539 | When the model is in model.train(), and we just keep passing images through the model:
model.train()
for i, data in enumerate(data_loader):
    x, y = data
    f = model(x)
Is the model learning high-activation features? |
st100540 | I have a set of 1k images, and I want the model to look at the particular subject which is in every single image; there is no such thing as a label. The process is purely unsupervised.
How can I make the model learn the features of the subject seen in every image?
Thank you |
st100541 | How can one visualize a network built in PyTorch? I tried tensorboard, but sometimes it fails because the support is not so good. Are there some nice tools to visualize the net? |
st100542 | You could try to export your model to ONNX and use this tutorial to visualize it. |
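A minimal sketch of the export step (the resnet18 and input shape are just placeholders); the resulting .onnx file can then be opened in a graph viewer such as Netron:
import torch
import torchvision.models as models

model = models.resnet18()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")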
st100543 | The following code dies because it runs out of memory.
images # Variable of size 2000 * 3 * 224 * 224 (N * C * H * W)
l = vgg.feature_layer # feature layer of vgg (3 * 244 * 244 -> 14 * 14 * 512)
f = l(images) # out of memory!!!!
Batch size (2000) seems too big, so I modified it like this:
f = tr.cat([l(im) for im in images.split(10)])  # now batch size is 10, still out of memory!!!
I reduced the batch size like above, but the out of memory issue still occurs.
How can I avoid it?
My friend told me to use Dataset and DataLoader.
But I want to find a simple way |
st100544 | It depends if you are training or testing.
In training, the internal layer outputs and other gradient buffers occupy space.
During testing, you could use torch.no_grad() to reduce memory consumption. Possibly try with this. |
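A small sketch combining both suggestions, assuming the images tensor and the feature layer l from the question (the chunk size of 10 is arbitrary):
feats = []
with torch.no_grad():                 # no autograd buffers are kept
    for im in images.split(10):
        feats.append(l(im))
f = torch.cat(feats)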
st100545 | Hi PyTorchers!
It might sound a bit like nonsense, but
is it possible to apply half precision (FP16) to a model that was trained with full precision (FP32), for inference only?
Or do I need to train a new model with half precision from scratch? |
st100546 | There is a whole area of model compression that experiments with such things.
Have you tried it and seen what the performance of your model is?
I have seen researchers go from half-precision to binary. Of course they have their own training method.
But for full-precision <–> half-precision, I would not expect much difference. Please update here once you have done the experiments. |
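For reference, converting an already-trained FP32 model to FP16 for inference is just a cast; a minimal sketch, assuming model holds the trained FP32 weights (whether accuracy holds up depends on the model — batch-norm layers, for instance, are often kept in FP32 in practice):
model = model.half().cuda().eval()     # cast the trained FP32 weights to FP16
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224).half().cuda()   # inputs must be FP16 as well
    out = model(x)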
st100547 | In the mnist hogwild example, every process loads its own dataset using the DataLoader function.
Since the dataset is read-only, is there a better way to do it? |
st100548 | What do you mean by every process loads its own dataset?
Can you elaborate a bit? |
st100549 | I would like to extend/customize the Kernel in a Conv1d layer. Is there any example of how this can be done? |
st100550 | Use the functional form torch.nn.functional.conv1d. https://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv1d |
st100551 | I don’t think that answers my question, I would like to write my own function: g(t - n) in
(f * g) (t) = sum_n f(n)g(t - n)
Sorry if I’m not clear. |
st100552 | Isn’t that just computing the kernel values, i.e., weights passed to the functional conv1d? |
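A small sketch of that idea — compute the kernel values from your own g and hand them to the functional conv (the Gaussian-like g is just an example; note that conv1d actually computes a cross-correlation, so flip the kernel if you need a true convolution):
import torch.nn.functional as F

x = torch.randn(1, 1, 100)            # (batch, in_channels, length)
n = torch.arange(-2., 3.)             # kernel support of size 5
g = torch.exp(-n ** 2)                # any g(n) you like
weight = g.view(1, 1, -1)             # (out_channels, in_channels, kernel_size)
y = F.conv1d(x, weight, padding=2)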
st100553 | Hi,
I arbitrarily get the below error message.
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorCopy.cpp line=20 error=77 : an illegal memory access was encountered
Usually there is no error trace, just this message. I am using CUDA_LAUNCH_BLOCKING=1. I see there are several posts of a similar nature; however, I do not see any definitive solution. I am pretty sure there is no illegal index problem, as the code runs fine half the time. This error happens even before the 1st minibatch has been processed.
The program does not crash after that but it does not move ahead either.
Any help appreciated. Thanks! |
st100554 | Do you have a small runnable code snippet to reproduce this error or does it occur completely random? |
st100555 | This error was happening arbitrarily till yesterday. Most of my attempts today have failed though. No change to source code during this time.
Re small snippet, not sure what to share with you as I am not sure where this is emerging from yet. Sometimes it happens when I do model.cuda() if torch.cuda.is_available() else model.cpu(), some times from other places. |
st100556 | I’m not sure, why you don’t get a proper error message even with CUDA_LAUNCH_BLOCKING=1.
Usually I check that all batches, especially the targets, have valid values.
Often one unusual batch has e.g. a too high target value for whatever reason, so that NLLLoss will crash.
Since the error is thrown randomly, we could try to remove some random operations, e.g. random transformations, no shuffling, num_workers=0 etc. to narrow down the problem.
However, it’s also strange that the CPU code runs without any problems.
Can you think of some operation, which might mess up your targets or shapes of your input or target? |
st100557 | Thanks @ptrblck. I am posting some code here.
from main.py
# Make the Dataloaders
train_dataloader, val_dataloader = make_dataloader(data_config, data_path)
print_rank("prepared the dataloaders")
# Make the Model
model = make_model(model_config, train_dataloader)
print_rank("prepared the model")
# Make the optimizer
optimizer = make_optimizer(optimizer_config, model)
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
from make_model subroutine:
model = Seq2Seq(input_dim=train_dataloader.dataset.input_dim,
vocab_size=train_dataloader.dataset.vocab_size,
model_config=model_config)
print("trying to move the model to GPU")
# Move it to GPU if you can
model.cuda() if torch.cuda.is_available() else model.cpu()
print("moved the model to GPU")
And below is the error message. Unlike previous, this time I got a more descriptive flow of events.
INFO - 2018-09-08T01:01:01.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:01 2018 | rank 0: prepared the dataloaders
INFO - 2018-09-08T01:01:02.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:02 2018 | rank 2: prepared the dataloaders
INFO - 2018-09-08T01:01:02.000Z /container_e2206_1531767901933_63142_01_000116: trying to move the model to GPU
INFO - 2018-09-08T01:01:03.000Z /container_e2206_1531767901933_63142_01_000116: trying to move the model to GPU
INFO - 2018-09-08T01:01:04.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:04 2018 | rank 1: prepared the dataloaders
INFO - 2018-09-08T01:01:05.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:05 2018 | rank 3: prepared the dataloaders
INFO - 2018-09-08T01:01:05.000Z /container_e2206_1531767901933_63142_01_000116: trying to move the model to GPU
INFO - 2018-09-08T01:01:05.000Z /container_e2206_1531767901933_63142_01_000116: moved the model to GPU
INFO - 2018-09-08T01:01:05.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:05 2018 | rank 0: prepared the model
INFO - 2018-09-08T01:01:06.000Z /container_e2206_1531767901933_63142_01_000116: moved the model to GPU
INFO - 2018-09-08T01:01:06.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:06 2018 | rank 2: prepared the model
INFO - 2018-09-08T01:01:07.000Z /container_e2206_1531767901933_63142_01_000116: trying to move the model to GPU
INFO - 2018-09-08T01:01:08.000Z /container_e2206_1531767901933_63142_01_000116: moved the model to GPU
INFO - 2018-09-08T01:01:08.000Z /container_e2206_1531767901933_63142_01_000116: Sat Sep 8 01:01:08 2018 | rank 1: prepared the model
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: THCudaCheck FAIL file=/pytorch/aten/src/THC/THCTensorCopy.cu line=102 error=77 : an illegal memory access was encountered
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: Traceback (most recent call last):
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: File "train.py", line 90, in <module>
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: main(model_path, config, data_path, log_dir)
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: File "train.py", line 35, in main
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: hvd.broadcast_parameters(model.state_dict(), root_rank=0)
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: File "/usr/local/lib/python3.6/dist-packages/horovod/torch/__init__.py", line 158, in broadcast_parameters
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: synchronize(handle)
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: File "/usr/local/lib/python3.6/dist-packages/horovod/torch/mpi_ops.py", line 404, in synchronize
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: mpi_lib.horovod_torch_wait_and_clear(handle)
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: File "/usr/local/lib/python3.6/dist-packages/torch/utils/ffi/__init__.py", line 202, in safe_call
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: result = torch._C._safe_call(*args, **kwargs)
INFO - 2018-09-08T01:01:09.000Z /container_e2206_1531767901933_63142_01_000116: torch.FatalError: Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt
This might be a little hard to parse. First, please ignore all the container_* strings prepended to each line. Second, notice that rank 3 does not print "prepared the model". We also see 4 "trying to move the model to GPU" prints but only 3 "moved the model to GPU" prints. Thus rank 3 was failing at the line "model.cuda() if torch.cuda.is_available() else model.cpu()".
Now, what could be causing this? There are no indexes involved so far, just moving a model to the GPU. |
st100558 | torch.cuda.set_device(hvd.local_rank()) is how I set device for each rank. hvd.local_rank() is printing the correct GPU number. So Horovod is not to blame here.
What else should I check? What are the cases where moving the model to the GPU would cause "illegal memory access"? |
st100559 | y91:
horovod
Since you are using horovod and mpi, this can be a bug in either horovod, pytorch distributed code or mpi. The first two seem more likely. If you can come up with a more reliable short script, I suggest you post to GitHub repos of those two projects. |
st100560 | Hi, newb so thanks for bearing with me:
Python 3.5.2, ubuntu 1804. CUDA 9.2
installed pytorch via
conda install pytorch torchvision cuda92 -c pytorch
trying to load custom datasets (https://www.kaggle.com/mratsim/starting-kit-for-pytorch-deep-learning)
when I try to
from torch import np
I get
“ImportError: cannot import name ‘np’”
have tried reinstalling pytorch (same method) = all packages already installed.
tried updating numpy which didn’t work.
Any tips much, much appreciated.
Cheers |
st100561 | I don’t know where the author of the post got their code, but we don’t have a numpy wrapper for pytorch, at least not officially. If there is one, it is not developed or maintained by core devs, and you can’t get it from the official conda package. |
st100562 | The tutorial also looks a bit outdated. I would suggest finding a more modern one. |
st100563 | Hi there, I am doing a model whose input is videos. I use a large batch size (96) (therefore many .jpg files need to be loaded, say 12288 pictures) and 32 workers in the DataLoader. I have noticed an interesting phenomenon: most of the time is spent at the beginning of each epoch (one epoch means a complete scan of the dataset), like this:
|Epoch: [6][1/11], lr: 0.00100|Time 81.513 (16.666)|Data 79.471 (14.021)|Loss 1.1498 (1.0695)|Prec@1 65.625 (72.222)|Prec@5 89.583 (91.257)|
|---|---|---|---|---|---|
|Epoch: [6][2/11], lr: 0.00100|Time 3.260 (15.635)|Data 1.233 (13.038)|Loss 0.8580 (1.0525)|Prec@1 77.083 (72.613)|Prec@5 90.625 (91.206)|
|Epoch: [6][3/11], lr: 0.00100|Time 2.791 (14.717)|Data 0.000 (12.106)|Loss 0.9509 (1.0449)|Prec@1 69.792 (72.403)|Prec@5 90.625 (91.163)|
|Epoch: [6][4/11], lr: 0.00100|Time 2.658 (13.913)|Data 0.000 (11.299)|Loss 1.3641 (1.0670)|Prec@1 63.542 (71.789)|Prec@5 87.500 (90.909)|
|Epoch: [6][5/11], lr: 0.00100|Time 2.674 (13.211)|Data 0.000 (10.593)|Loss 0.8397 (1.0523)|Prec@1 78.125 (72.200)|Prec@5 92.708 (91.026)|
|Epoch: [6][6/11], lr: 0.00100|Time 2.638 (12.589)|Data 0.000 (9.970)|Loss 0.7121 (1.0316)|Prec@1 82.292 (72.814)|Prec@5 95.833 (91.318)|
|Epoch: [6][7/11], lr: 0.00100|Time 2.647 (12.037)|Data 0.000 (9.416)|Loss 1.0643 (1.0335)|Prec@1 71.875 (72.760)|Prec@5 89.583 (91.219)|
|Epoch: [6][8/11], lr: 0.00100|Time 2.850 (11.553)|Data 0.000 (8.921)|Loss 0.8783 (1.0250)|Prec@1 77.083 (72.994)|Prec@5 92.708 (91.299)|
|Epoch: [6][9/11], lr: 0.00100|Time 2.766 (11.114)|Data 0.000 (8.475)|Loss 1.1610 (1.0320)|Prec@1 68.750 (72.776)|Prec@5 88.542 (91.158)|
|Epoch: [6][10/11], lr: 0.00100|Time 2.663 (10.711)|Data 0.000 (8.071)|Loss 0.9689 (1.0289)|Prec@1 75.000 (72.885)|Prec@5 92.708 (91.233)|
|Epoch: [6][11/11], lr: 0.00100|Time 1.609 (10.298)|Data 0.000 (7.704)|Loss 1.5306 (1.0395)|Prec@1 64.286 (72.705)|Prec@5 83.333 (91.068)|
|Epoch: [7][1/11], lr: 0.00100|Time 67.395 (12.780)|Data 65.243 (10.206)|Loss 1.1724 (1.0455)|Prec@1 66.667 (72.429)|Prec@5 87.500 (90.905)|
|Epoch: [7][2/11], lr: 0.00100|Time 19.035 (13.041)|Data 16.957 (10.487)|Loss 0.9826 (1.0428)|Prec@1 69.792 (72.313)|Prec@5 93.750 (91.029)|
|Epoch: [7][3/11], lr: 0.00100|Time 2.656 (12.625)|Data 0.000 (10.068)|Loss 0.9412 (1.0385)|Prec@1 76.042 (72.469)|Prec@5 93.750 (91.143)|
|Epoch: [7][4/11], lr: 0.00100|Time 2.664 (12.242)|Data 0.000 (9.680)|Loss 1.1080 (1.0413)|Prec@1 77.083 (72.655)|Prec@5 91.667 (91.164)|
|Epoch: [7][5/11], lr: 0.00100|Time 2.791 (11.892)|Data 0.000 (9.322)|Loss 0.9605 (1.0382)|Prec@1 75.000 (72.746)|Prec@5 96.875 (91.385)|
|Epoch: [7][6/11], lr: 0.00100|Time 2.657 (11.562)|Data 0.000 (8.989)|Loss 1.1846 (1.0437)|Prec@1 71.875 (72.713)|Prec@5 85.417 (91.163)|
|Epoch: [7][7/11], lr: 0.00100|Time 2.818 (11.261)|Data 0.000 (8.679)|Loss 0.9796 (1.0414)|Prec@1 69.792 (72.608)|Prec@5 93.750 (91.256)|
|Epoch: [7][8/11], lr: 0.00100|Time 2.671 (10.974)|Data 0.000 (8.390)|Loss 0.9922 (1.0397)|Prec@1 69.792 (72.511)|Prec@5 90.625 (91.234)|
|Epoch: [7][9/11], lr: 0.00100|Time 2.671 (10.707)|Data 0.000 (8.119)|Loss 1.1123 (1.0421)|Prec@1 66.667 (72.315)|Prec@5 93.750 (91.318)|
|Epoch: [7][10/11], lr: 0.00100|Time 2.674 (10.456)|Data 0.000 (7.865)|Loss 1.0186 (1.0413)|Prec@1 73.958 (72.368)|Prec@5 92.708 (91.363)|
|Epoch: [7][11/11], lr: 0.00100|Time 1.597 (10.187)|Data 0.000 (7.627)|Loss 1.2137 (1.0437)|Prec@1 66.667 (72.289)|Prec@5 90.476 (91.351)|
|Epoch: [8][1/11], lr: 0.00100|Time 94.021 (12.653)|Data 91.101 (10.082)|Loss 0.9634 (1.0412)|Prec@1 77.083 (72.437)|Prec@5 92.708 (91.393)|
|Epoch: [8][2/11], lr: 0.00100|Time 10.186 (12.582)|Data 8.128 (10.026)|Loss 0.9580 (1.0387)|Prec@1 78.125 (72.608)|Prec@5 89.583 (91.338)|
|Epoch: [8][3/11], lr: 0.00100|Time 2.751 (12.309)|Data 0.001 (9.748)|Loss 1.0534 (1.0392)|Prec@1 72.917 (72.617)|Prec@5 89.583 (91.287)|
|Epoch: [8][4/11], lr: 0.00100|Time 2.665 (12.049)|Data 0.000 (9.484)|Loss 1.1317 (1.0418)|Prec@1 66.667 (72.448)|Prec@5 92.708 (91.327)|
|Epoch: [8][5/11], lr: 0.00100|Time 2.657 (11.801)|Data 0.000 (9.235)|Loss 1.0215 (1.0412)|Prec@1 70.833 (72.404)|Prec@5 94.792 (91.423)|
|Epoch: [8][6/11], lr: 0.00100|Time 2.814 (11.571)|Data 0.000 (8.998)|Loss 1.2059 (1.0457)|Prec@1 70.833 (72.362)|Prec@5 91.667 (91.429)|
|Epoch: [8][7/11], lr: 0.00100|Time 2.774 (11.351)|Data 0.001 (8.773)|Loss 0.9423 (1.0430)|Prec@1 72.917 (72.376)|Prec@5 90.625 (91.408)|
|Epoch: [8][8/11], lr: 0.00100|Time 2.672 (11.139)|Data 0.000 (8.559)|Loss 0.9794 (1.0413)|Prec@1 66.667 (72.231)|Prec@5 90.625 (91.388)|
|Epoch: [8][9/11], lr: 0.00100|Time 2.666 (10.938)|Data 0.000 (8.355)|Loss 0.9243 (1.0384)|Prec@1 73.958 (72.274)|Prec@5 89.583 (91.344)|
|Epoch: [8][10/11], lr: 0.00100|Time 2.671 (10.745)|Data 0.000 (8.161)|Loss 1.2551 (1.0437)|Prec@1 72.917 (72.289)|Prec@5 90.625 (91.326)|
|Epoch: [8][11/11], lr: 0.00100|Time 1.616 (10.538)|Data 0.000 (7.976)|Loss 0.8046 (1.0412)|Prec@1 66.667 (72.231)|Prec@5 100.000 (91.417)|
As you can see, the training procedure takes marginal time, while most of the time is spent at the beginning of an epoch.
Is that an expected phenomenon? |
st100564 | You are using a lot of workers, it takes a lot of time to bring up 32 workers. How many cores do you have in your CPU? Can you try lowering to maybe 4 workers, and see if the issue persists? |
st100565 | I have 32 CPUs in the server. I tried num_workers=4,8,16,32,48,96, but in all cases the data loading time is still 80s+. |
st100566 | In that case, can you post your data loading function? I guess if you need to call a heavy init() at the beginning of every epoch it could also be the bottleneck (but I’m not sure how your code is defined).
I recommend running cProfiler to really understand what the issue is. |
st100567 | PengZhenghao:
I try num_workers=4,8,16,32,48,96, but in all circumstance the data loading time is still 80s+.
Is your data stored on a ssd or hdd? Are you preloading all samples or just in time when you need them? How large is one image? |
st100568 | What does the 1/11 signify?
How many iterations through your training loop does it take to exhaust the data loader?
Keep in mind that the data loader is going to assign an entire batch to each worker, and with 32 workers you’re going to be hammering the disk pretty hard. Once you have the first batch ready to go, the next 31 should also finish right around the same time. That means that batch two (through 32) should be ready with zero latency, and the first loader process should be working on the 33rd, which is why everything but the first batch is quick.
You can try having two data loaders active at the same time, so that when you get to epoch 2, the second DL already has everything loaded up and ready to go. Something like:
this_dl = DataLoader(..., num_workers=16)
next_dl = DataLoader(..., num_workers=16)
this_dl_iter = iter(this_dl)
next_dl_iter = iter(next_dl)

for epoch in range(epochs):
    for batch in this_dl_iter:
        # ...training, etc...

    this_dl_iter = next_dl_iter
    this_dl, next_dl = next_dl, this_dl
    next_dl_iter = iter(next_dl)
Note: I’m assuming without checking that trying to iterate over the same DL twice at the same time won’t work. Double check that. |
st100569 | I’ve been trying to call my model from a script in matlab, and it just throws an error whenever I try and call pytorch:
>> system('python -c "import torch; print(torch.__version__)"')
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/askates/anaconda3/envs/torch/lib/python3.6/site-packages/torch/__init__.py", line 78, in <module>
from torch._C import *
RuntimeError: stoi: no conversion
Everything works correctly outside of matlab, but fails when called through system commands. |
st100570 | matlab ships with a libstdc++ that is too old.
see https://github.com/pytorch/pytorch/issues/7082 |
st100571 | Looking through that thread, they suggest replacing the MATLAB libstdc++ with the default one (by renaming it). I can’t find any libstdc++ for Matlab, locate libstdc++ lists nothing in a Matlab directory. |
st100572 | It still does not work even when I make my Matlab (I tried Matlab 2016b and Matlab 2018a) use my system libstdc++.
Maybe the problem is in 0.4. PyTorch 0.3 works well with my Matlab, with no need to change anything.
my environment:
Ubuntu16.04
Matlab2016b Matlab2018a
PyTorch 0.4
Python 2.7 |
st100573 | When I try to create a tensor using torch.empty() I get the following:
>>> torch.empty(5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/diego/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 57, in __repr__
return torch._tensor_str._str(self)
File "/home/diego/anaconda3/lib/python3.6/site-packages/torch/_tensor_str.py", line 218, in _str
fmt, scale, sz = _number_format(self)
File "/home/diego/anaconda3/lib/python3.6/site-packages/torch/_tensor_str.py", line 96, in _number_format
if value != math.ceil(value.item()):
RuntimeError: Overflow when unpacking long
What am I doing wrong?
PD: I am using Pytorch 0.4.0 |
st100574 | Solved by richard in post #5
And yes, that is correct. You can apply operations but can’t print the value.
This happens because torch.empty initializes your tensor with “un-initialized” data. Some of this data happened to have very, very large float values. The tensor printing code attempts to convert this number to an int (to… |
st100575 | You’re not doing anything wrong! You can carry on and keep using your tensor. This is a tensor printing bug. |
st100576 | So I can apply operations to it, but I can’t print its value? Why does this happens? And is there any workaround. Thank you |
st100577 | There will probably be a new nightly package sometime that will fix this printing bug. |
st100578 | And yes, that is correct. You can apply operations but can’t print the value.
This happens because torch.empty initializes your tensor with “un-initialized” data. Some of this data happened to have very, very large float values. The tensor printing code attempts to convert this number to an int (to see if it would be better printed as an integer), causing an overflow because there is a limit on how large a python int/long can be. |
st100579 | It might be noted that the newbie “beginner-blitz” tutorial leads off with:
x = torch.empty(5, 3)
print(x)
which, of course, fails.
Might be worth tweaking the tutorial, to avoid putting off people checking out the package. |
st100580 | Thanks for the reply.
Yes; starting with a blank notebook:
from __future__ import print_function
import torch
x = torch.empty(5, 3)
print(x)
Generates the following:
AttributeError: module ‘torch’ has no attribute ‘empty’
I got a series of other errors early on in the tutorial as well:
x = torch.zeros(5, 3, dtype=torch.long)
AttributeError: module ‘torch’ has no attribute ‘long’
x = torch.tensor([5.5, 3])
Only works with torch.Tensor([5.5, 3]) (capitalized)
I’m using a fresh installation of PyTorch for a fastai course – have posted there to see if they’re using a modified or old version. They seem really enthusiastic and dedicated to PyTorch (and recommend the tutorial), so I was surprised at the many immediate problems. Any advice much appreciated. |
st100581 | Your PyTorch version is most likely pre 0.4.0.
Could you check it with:
import torch
print(torch.__version__)
I don’t know which version the fast.ai wrapper is using and I’m not sure, if the latest stable release is already implemented.
CC @jphoward |
st100582 | yep, that’s the problem: 0.3.1.post2
will check with them to see if it can be updated
thanks much |
st100583 | Hi,
I trained a neural network model and would like to compute gradients of outputs wrt to its inputs, using following code:
input_var = Variable(torch.from_numpy(X), requires_grad=True).type(torch.FloatTensor)
predicted_Y = model_2.forward(input_var)
predicted_Y.backward(torch.ones_like(predicted_Y), retain_graph=True)
where X is the input data and model_2 is the neural network. But I got None as input_var.grad. I googled it, but most issues are related to requires_grad not being set to True, which is not the case here. I wonder if anyone knows what the problem might be?
Thank you! |
st100584 | Solved by SimonW in post #2
Your input_var is an intermediate result, nor a leaf because you created a variable that requires_grad, and then .type creates a new one that is an intermediate result.
Also, your code has some other issues:
You don’t need Variable wrappers anymore. Just write input_var = torch.as_tensor(X, dtyp… |
st100585 | Wei_Chen:
input_var = Variable(torch.from_numpy(X), requires_grad=True).type(torch.FloatTensor)
Your input_var is an intermediate result, not a leaf, because you created a variable that requires_grad, and then .type creates a new one that is an intermediate result.
Also, your code has some other issues:
You don’t need Variable wrappers anymore. Just write input_var = torch.as_tensor(X, dtype=torch.float).
You should use model_2(input_var), rather than directly the forward function. It skips some hooks and can cause incorrect results in certain cases.
retain_graph isn’t needed in your case. |
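Putting those points together, a corrected sketch of the original snippet (the shape of X is made up):
X = np.random.randn(8, 4).astype(np.float32)
input_var = torch.as_tensor(X).requires_grad_()   # a leaf tensor that requires grad
predicted_Y = model_2(input_var)                  # call the module, not .forward()
predicted_Y.backward(torch.ones_like(predicted_Y))
print(input_var.grad)                             # no longer None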
st100586 | Hey
I’ve noticed expand/expand_as makes a tensor non-contiguous
Is there any other op that might do this?
Thanks
Henry |
st100587 | AFAIK there is
t() transpose
some Select/Slice operations, especially those with stride>1, i.e. tensor[::2]
expand
In such cases, the storage doesn’t change, it only modifies stride(you can get it by tensor.stride()).
is_contiguous is implemented in C.
int THTensor_(isContiguous)(const THTensor *self)
{
  long z = 1;
  int d;
  for(d = self->nDimension-1; d >= 0; d--)
  {
    if(self->size[d] != 1)
    {
      if(self->stride[d] == z)
        z *= self->size[d];
      else
        return 0;
    }
  }
  return 1;
}
st100588 | Thanks,
So how is the contiguous() function implemented then?
Can you point me to the location of the source?
Thanks,
Henry |
st100589 | It just copy the data and make a new tensor. See implementation in TH
https://github.com/pytorch/pytorch/blob/master/torch/lib/TH/generic/THTensor.c#L182-L199 |
st100590 | Gotcha, so when using expand should I always do a contiguous() after it or does that only add extra computation? |
st100591 | In fact, I never add contiguous after expand. I often use expand for something like broadcast in numpy, after the broadcast operation, the result should be collect and is contiguous.
import torch as t
a = t.Tensor(3,4)
b = t.Tensor(3)
c = b.unsqueeze(1).expand_as(a) + a
print(c.is_contiguous())
expand is a memory-efficient operation because it won’t copy data in memory (repeat will); if you make it contiguous, it will copy data and occupy extra memory.
import torch as t
a = t.Tensor(1,3)
b = a.expand(100000000000,3)
print(id(a.storage())==id(b.storage()))
print(b.storage())
# !!!! don't do the below !!!!
# print(b) or b.contiguous() |
st100592 | According to this, the following operators can make a tensor non-contiguous:
narrow(), view(), expand() or transpose() |
st100593 | Is there a mechanism to iterate through the modules of a network in the exact order they’re executed in the forward pass? I’m looking for a solution that does not require the usage of torch.nn.Sequential. |
st100594 | Because forward can be customized by users, there is no general way guaranteed to work on arbitrary networks. |
st100595 | One possibility is to run the forward once, and use jit tracer to trace the graph. But you may get lower-level ops that implement each layer rather than the layers as individual ops (e.g., instance norm may become: reshape + batch_norm + reshape + mul + add) |
st100596 | Thanks for your reply, SimonW. I ended up replacing layers in the network itself. |
st100597 | Hello,
I am only beginning with pytorch and I am stuck with the problem of processing sequences with variable length.
My task is to take two sequences and infer how similar they are.
I have come up with something like this. q1 and q2 are just lists of integer word indices.
class SimilarityModule(nn.Module):
    def __init__(self):
        super(SimilarityModule, self).__init__()
        self.embedding = nn.Embedding(...)
        ....

    def forward(self, q1, q2):
        q1_embeds = self.embedding(q1)
        q2_embeds = self.embedding(q2)
        # mean over whole sequence
        q1_repre = torch.mean(q1_embeds, 0)
        q2_repre = torch.mean(q2_embeds, 0)
        dot_product = torch.dot(q1_repre, q2_repre) / torch.norm(q1_repre, 2) / torch.norm(q2_repre, 2)
        return dot_product
I can run the model on one sample:
similarity = model( q1, q2)
I have a question how properly create a batch of the sequences with variable length and feed it to the module? Thank you in advance for help! |
st100598 | nn.Embedding has the option for a padding_idx (check docs).
padding_idx controls the padding index. When you send in q1 or q2 containing this index, zeros are returned and they are ignored for gradient.
So you can send q1 and q2 to be mini-batches padded to the maximum length of your sequence, and padded with the value padding_idx.
after getting q1_embeds and q2_embeds and before the mean operation, you might have to do a torch.gather of specific indices before doing mean, or do the mean operation in a for loop. |
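A small sketch of the padded mini-batch idea (vocabulary size, embedding dim and the token values are made up); masking on the pad index also lets you average over the real tokens only:
PAD = 0
emb = nn.Embedding(1000, 50, padding_idx=PAD)

q1 = torch.tensor([[5, 23, 99,  4, PAD, PAD],
                   [7, 81, 12, 40,   3,   2]])           # two questions, padded to length 6
q1_embeds = emb(q1)                                      # (2, 6, 50), zeros at PAD positions

lengths = (q1 != PAD).sum(dim=1, keepdim=True).float()   # true length of each question
q1_repre = q1_embeds.sum(dim=1) / lengths                # (2, 50) mean over real tokens only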
st100599 | Thank you,
Is there any way to work with “truly” variable-length sequences? For instance, if I wanted to have a CNN with an RNN on top, would I also need masking?