st101000
I have a simple problem: I want to transfer the weights of one model to another such that only the final layers of the model differ, an MLP for example. I am loading the pretrained model with `load_state_dict`, but it tries to copy ALL the weights of the pretrained model. How do I copy only the weights that I want, i.e. those that exist in the new model (same layer name)?
st101001
Solved by Skinish in post #6.
st101002
You could adapt this code snippet and set your condition to the layer names you would like to copy.
st101003
Looks alright. What is init_weights() doing? Is it just initializing the last two linear layers?
st101004
I don’t know why but my previous post disappeared. Therefore, I am posting the solution once again:

def load_weights(self):
    pretrained_dict = torch.load('model.torch')
    model_dict = self.state_dict()
    # 1. filter out unnecessary keys
    pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
    # 2. overwrite entries in the existing state dict
    model_dict.update(pretrained_dict)
    # 3. load the new state dict
    self.load_state_dict(model_dict)
    # 4. create final layer and initialize it
    self.linear = nn.Linear(self.output_size, self.n_classes)
    torch.nn.init.xavier_uniform_(self.linear.weight)
    self.linear.bias.data.fill_(0.01)
    self.cuda()  # put model on GPU once again
st101005
I need a block or class that consists of multiple parallel fc layers or conv layers. Given an input tensor, these parallel layers should all compute on the identical input tensor, and multiple parallel output tensors are expected as the output. Since the number of parallel layers is large and varies each time I train a model with it, I would like to pass an argument that sets the number of inner parallel layers. Hope I have made myself clear. How could I construct such a layer?
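One common way to build this, as a minimal sketch (ParallelBlock and all sizes here are illustrative names, not from the thread): keep the branches in an nn.ModuleList whose length is a constructor argument.

import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Applies num_branches independent linear layers to the same input."""
    def __init__(self, in_features, out_features, num_branches):
        super(ParallelBlock, self).__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(in_features, out_features) for _ in range(num_branches)]
        )

    def forward(self, x):
        # every branch receives the identical input tensor
        return [branch(x) for branch in self.branches]

block = ParallelBlock(16, 8, num_branches=4)
outs = block(torch.randn(2, 16))  # list of four [2, 8] tensors

Registering the branches in an nn.ModuleList (rather than a plain Python list) is what makes their parameters visible to the optimizer and to .cuda()/.to().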
st101006
I am interested in using some sort of cloud platform that offers GPUs (>= 12 GB) and supports PyTorch. It appears that there are a lot of cloud platforms available, and I was wondering if someone has experience with one of them and would like to share. I would be glad to find a platform that (in addition to not being the most expensive) has available support staff and allows an easy, quick start (i.e. a platform that does not require a few hours before the first “run”). Hope someone will be able to help. Thanks in advance!
st101007
Hi, I got an error while importing torch: “ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory”. I installed pytorch with conda (“conda install pytorch torchvision -c pytorch”). The installed packages are:

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    cudatoolkit-9.2            |                0       351.0 MB

The following NEW packages will be INSTALLED:

    cudatoolkit: 9.2-0
    ninja:       1.8.2-py36h6bb024c_1
    pytorch:     0.4.1-py36_cuda9.0.176_cudnn7.1.2_1 pytorch
    torchvision: 0.2.1-py36_1 pytorch

The following packages will be UPDATED:

    conda: 4.5.4-py36_0 --> 4.5.10-py36_0

I got the error while importing torch. Can anyone help me?
st101008
Solved by albertwujj in post #3 Fixed! Try conda install -c anaconda cudatoolkit==9.0
st101009
We are fixing this in a few hours for the pytorch binaries. Sorry for the trouble.
st101010
Thank you all. The error was fixed with “conda install -c anaconda cudatoolkit==9.0”.
st101011
I have a Power8 machine with 4 Nvidia P100s. As there are no official binaries for the ppc64le architecture, I build from source. However, I get the following error:

[ 96%] Building CXX object src/ATen/test/CMakeFiles/basic.dir/basic.cpp.o
[ 96%] Building CXX object src/ATen/test/CMakeFiles/scalar_tensor_test.dir/scalar_tensor_test.cpp.o
[ 97%] Building CXX object src/ATen/test/CMakeFiles/native_test.dir/native_test.cpp.o
[ 97%] Building CXX object src/ATen/test/CMakeFiles/scalar_test.dir/scalar_test.cpp.o
[ 98%] Linking CXX executable wrapdim_test
[ 98%] Linking CXX executable dlconvertor_test
[ 98%] Linking CXX executable undefined_tensor_test
[ 98%] Linking CXX executable atest
../libATen.so.1: undefined reference to `cudnnSetConvolutionGroupCount'
../libATen.so.1: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status

The same two undefined references, followed by “recipe for target ... failed” and “make[2]: *** [...] Error 1”, repeat for every test executable (wrapdim_test, undefined_tensor_test, atest, dlconvertor_test, broadcast_test, scalar_tensor_test, scalar_test, basic, native_test), and the build finally aborts with:

Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
st101012
Solved by SimonW in post #2 Seems that it found CUDNN when compiling but can’t link it. That’s weird. Do you have CUDNN installed?
st101013
Seems that it found CUDNN when compiling but can’t link it. That’s weird. Do you have CUDNN installed?
st101014
This issue may help you: https://github.com/pytorch/pytorch/issues/3567#issuecomment-342895938 — try conda uninstall cudnn and recompiling.
st101015
Thank you very much for your replies. I do not have sudo rights on that machine. However, it was apparently due to cuDNN being incompatible with CUDA. I asked the admin to uninstall CUDA/cuDNN and reinstalled CUDA 8.0 and “cuDNN v7.0.5 Library for Linux (Power8)”. Now PyTorch compiles without errors in Anaconda 2 and is working.
st101016
@osuemer could you tell me which version of pytorch you used and also the gcc version? I am also facing trouble compiling pytorch on the IBM Power8 machine (with 4 P100 GPUs). I get a different error from yours:

[ 52%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTByte.cu.o
[ 52%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/__/THC/generated/ATen_generated_THCTensorMathCompareTLong.cu.o
... (many similar “Building NVCC (Device) object” lines for the THC and THCUNN kernels) ...
[ 52%] Building NVCC (Device) object src/ATen/CMakeFiles/ATen.dir/cuda/detail/ATen_generated_IndexUtils.cu.o
sh: 1: Cannot fork
CMake Error at ATen_generated_THCTensorConv.cu.o.cmake:207 (message):
  Error generating /home/sathap1/Software/pytorch/torch/lib/build/aten/src/ATen/CMakeFiles/ATen.dir/__/THC/./ATen_generated_THCTensorConv.cu.o
src/ATen/CMakeFiles/ATen.dir/build.make:161: recipe for target 'src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorConv.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/__/THC/ATen_generated_THCTensorConv.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
slurmstepd: error: get_exit_code task 0 died by signal
st101017
GitHub: pedrodiamel/configurate-deeplearning-ibm-ppc64le — deep learning server configuration for IBM POWER8 (ppc64le).
st101018
Hi, I want to know: is there a function in pytorch that can solve a system of linear equations by iteration, given an initial point? Thank you for your attention.
st101019
Solved by Ranahanocka in post #15 It’s actually from the torch.optim package (you can see all the optimizers here, in addition to SGD, Adam, there’s also LBFGS, and more…) https://pytorch.org/docs/stable/optim.html To use LBFGS, your optimizer would be: optimizer = optim.LBFGS
st101020
Hi, can you give us more details on your optimization problem? Depending on your criteria and constraints, it may be possible to solve with matrices. Otherwise, to use an optimizer you must do the following. Assume x is the parameter you want to optimize, F is the linear function, and y is the desired output, so we want argmin Loss(F(x) - y), with x0 as the initial point. You should build a “network” F which does the linear operations you suggest. Then:

x = Variable(x0)
optimizer = Adam([x], lr=args.lr)
optimizer.zero_grad()
y_est = F.forward(x)
loss = criterion(y_est, y)
loss.backward()
optimizer.step()
st101021
Thank you for your reply! I want to solve Ax = y with no constraints, where A is a symmetric positive semi-definite matrix. Is there some function where I can input A, y, and an initial value of x, and get the optimal solution for x as output? Actually, I can always get an approximate solution (x0) for x, so I want to use x0 as the initial value of x.
st101022
So is A given, or do you also need to solve for A? If it’s given, the optimal solution for x (i.e., x*) w.r.t. the mean squared error is x* = y times pinv(A), where pinv is the pseudo-inverse. Also, y is a column-stacked matrix of all y, and so is A.
st101023
Otherwise, to use some form of gradient descent, and assuming that A is fixed/given, you set F (the network) to be a linear layer and set F.linear.weight = A.
st101024
Yes, you are right, A is given and does not need to be optimized. Gradient descent is OK, but its efficiency is very low, because it cannot exploit the facts that A is symmetric positive semi-definite and that x0 is close to the optimal solution. So, is there a function that lets me input A, y, and x0 to solve it efficiently?
st101025
I wrote the closed-form solution above, i.e., Ranahanocka: x* = y times pinv(A)
st101026
Oh, no. This function is too slow. Please note that what I care about is efficiency. So, I want to apply an iterative method.
st101027
Why can’t you use the iterative method I mentioned above? Ranahanocka: You should build a “network” F which does the linear operations you suggest. Then:

x = Variable(x0)
optimizer = Adam([x], lr=args.lr)
optimizer.zero_grad()
y_est = F.forward(x)
loss = criterion(y_est, y)
loss.backward()
optimizer.step()
st101028
This is because, in this problem, even when we have a good initial point, iteratively optimizing with SGD is still slow, as it only achieves a linear convergence rate. In general, most toolkits use a more efficient iterative approach that can achieve a quadratic convergence rate.
st101029
Can you give an example of a specific toolkit and the approach it uses? What makes them converge at a quadratic rate?
st101030
For example, quasi-Newton method, Newton method, Gauss-Seidel approach, and so on.
st101031
Isn’t LBFGS in the family of quasi-Newton methods? If that’s good for you, you can just set the optimizer to optim.LBFGS (where I wrote Adam). You can also check all the other possible optimizers in the torch.optim package.
st101032
You mean that I can swap Adam for the BFGS method? How can I do this? (I seem to see hope!)
st101033
It’s actually from the torch.optim package (you can see all the optimizers here; in addition to SGD and Adam, there’s also LBFGS, and more…) https://pytorch.org/docs/stable/optim.html To use LBFGS, your optimizer would be: optimizer = optim.LBFGS
st101034
I guess you know you don’t need PyTorch for this, but I assume from your question you want to use PyTorch. Is that right? If not, you can just use scipy.optimize.minimize(method='BFGS' or 'L-BFGS-B'). Another option is GEKKO, but that might be overkill for this, I’m not sure.
st101035
The dimension of my A is 2000×2000, and y is 2000×1. I need PyTorch because it can use the GPU to speed up my optimization.
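For concreteness, a minimal sketch of the LBFGS route discussed above (all sizes, the learning rate, and max_iter are illustrative; torch.optim.LBFGS needs a closure that re-evaluates the loss):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
n = 2000
M = torch.randn(n, n, device=device)
A = M @ M.t()                                   # symmetric PSD, illustrative stand-in
y = torch.randn(n, 1, device=device)
x = torch.zeros(n, 1, device=device).requires_grad_(True)  # replace with your x0

optimizer = torch.optim.LBFGS([x], lr=1.0, max_iter=100)

def closure():
    # LBFGS re-evaluates the objective several times per step
    optimizer.zero_grad()
    loss = ((A @ x - y) ** 2).mean()
    loss.backward()
    return loss

optimizer.step(closure)
print(((A @ x - y) ** 2).mean().item())         # residual after one LBFGS step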
st101036
Does anyone know if there are weights available for a ResNet trained on ImageNet and fine-tuned on VID (or do they even exist)? I’m specifically looking for ResNet-101 weights. More info about VID (object classification/detection from video) can be found here: http://bvisionweb1.cs.unc.edu/ilsvrc2015/download-videos-3j16.php#vid
st101037
I have a question to ask. I have a dockerfile with:

...
# Install Pytorch
# Instructions at http://pytorch.org/
RUN conda install -y pytorch torchvision cuda90 -c pytorch
RUN conda install -y opencv
...

I notice that even so, pytorch uses CUDA 8 and not CUDA 9. Inside the container (whose image was just built):

ipython
import torch
torch.version.cuda  # 8.0.61

Outside the container, the same lines correctly give 9.0.176. How do I get pytorch to install with CUDA 9? The setup is an AWS p3.8xlarge AMI (Conda). Thank you,
st101038
Could you check the log from your docker container to see which version was selected to be installed? Do you also get CUDA 8 using python instead of ipython? Could it be that multiple PyTorch versions were installed?
st101039
Using

RUN conda install -y pytorch torchvision cuda92 -c pytorch

instead of

RUN conda install -y pytorch torchvision cuda90 -c pytorch

does get pytorch to switch. For some reason, cuda90 is non-sticky … not sure why … Thank you ptrblck, I really appreciate your tips/help.
st101040
I can detect CUDA but cannot use it, so weird. [screenshot: 微信图片_20180828221612.jpg] My configuration: 1080 Ti, driver 390.77, CUDA 9.0, cuDNN 7.0.5, Python 3.5. I am sure I correctly installed CUDA, because “nvidia-smi” and “nvcc -V” both work normally, and tensorflow-gpu works well. Who can help me!
st101041
I usually use a.to(device) instead of .cuda(), where the device is set like this: device = torch.device("cuda" if args.cuda else "cpu")
st101042
Hi, since I am looking into the source code of pytorch, I don’t want to reinstall it whenever I modify some of the code. So, is it possible to run pytorch without installing it as a python package? In the previous version of pytorch, I could do that via the following:

cd pytorch
mkdir build
cd build
cmake .. && make -j12

and export the necessary python path:

export PYTHONPATH=$PYTHONPATH:$HOME/pytorch/build/lib.linux-x86_64-2.7:$HOME/pytorch/build/lib.linux-x86_64-2.7/caffe2

Everything was fine when importing torch. But in the current version of pytorch, I can only see the caffe2-related library in the build directory; no torch python files can be found. Any advice? Thank you.
st101043
Solved by be7f984b2f2e66ba7969 in post #2 I Just find a way to do so. python setup.py build after that, everything is in the build directory.
st101044
I just found a way to do so: python setup.py build. After that, everything is in the build directory.
st101045
Hi all, I just started using pytorch. I wonder how I could define a layer that combines convolution with deconvolution. More precisely, for a tensor whose shape is [I_T, I_H, I_W], what I want is to do convolution over the second and third dimensions, and deconvolution over the first dimension, so the shape of the output will be [(I_T - 1) * S_T + K_T - 2 * P_T, (I_H + 2 * P_H - K_H) / S_H + 1, (I_W + 2 * P_W - K_W) / S_W + 1] (T, H, W denote the three dimensions respectively; I means input, S means stride, K means kernel size, P means padding). Meanwhile, I know I can get an output of the same shape by using convolution first and then deconvolution, but I want to do both at the same time, so I have to define a new layer. Any suggestions? If demo code is too complex, could you give me some ideas? Any help would be appreciated.
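One way to package this as a single layer, as a minimal sketch (not a fused kernel — it composes a transposed conv over T with a regular conv over H and W inside one Module; ConvDeconv and all sizes are illustrative), treating the tensor as a one-channel 3D volume [N, 1, T, H, W]:

import torch
import torch.nn as nn

class ConvDeconv(nn.Module):
    """Deconvolve along T, convolve along H and W, in one Module."""
    def __init__(self, k_t, s_t, p_t, k_hw, s_hw, p_hw):
        super(ConvDeconv, self).__init__()
        # upsamples T only: T' = (T - 1) * s_t + k_t - 2 * p_t
        self.deconv_t = nn.ConvTranspose3d(1, 1, kernel_size=(k_t, 1, 1),
                                           stride=(s_t, 1, 1), padding=(p_t, 0, 0))
        # convolves H and W only: H' = (H + 2 * p_hw - k_hw) / s_hw + 1
        self.conv_hw = nn.Conv3d(1, 1, kernel_size=(1, k_hw, k_hw),
                                 stride=(1, s_hw, s_hw), padding=(0, p_hw, p_hw))

    def forward(self, x):  # x: [N, 1, T, H, W]
        return self.conv_hw(self.deconv_t(x))

layer = ConvDeconv(k_t=4, s_t=2, p_t=1, k_hw=3, s_hw=1, p_hw=1)
y = layer(torch.randn(1, 1, 8, 32, 32))  # -> [1, 1, 16, 32, 32]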
st101046
In the case of an image enhancement application, I would like to normalize each image of the batch independently before it enters the network, and then inverse-normalize the output images using the statistics of their corresponding input image. I tried to use InstanceNorm2d(), but I’m not sure I understand how to use it in my case. Any idea?
st101047
You could calculate the mean and std for each image and channel, normalize the images, perform your forward pass, and finally undo the normalization on the output:

batch_size = 10
channels = 3
h, w = 24, 24
images = torch.randn(batch_size, channels, h, w)
im_mean = images.view(batch_size, channels, -1).mean(2).view(batch_size, channels, 1, 1)
im_std = images.view(batch_size, channels, -1).std(2).view(batch_size, channels, 1, 1)
images_norm = (images - im_mean) / im_std

model = nn.Conv2d(3, 3, 3, 1, 1)
output = model(images_norm)
output = (output * im_std) + im_mean

Let me know if this works for you or if I’ve misunderstood your use case.
st101048
Yeah, thank you, that’s indeed what I was looking for, except in my use case the input is composed of two images (so the input size is [batch_size, 2*channels, h, w]) and I need to jointly normalize both images to get one mean and one std per channel. So starting from your piece of code, how do I compute statistics not for each channel separately, but for two channels at a time?
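Building on the snippet above, a minimal sketch of one way to do this (assuming the two images are stacked as [img1 channels, img2 channels] along dim 1; all sizes illustrative):

import torch

batch_size, channels, h, w = 10, 3, 24, 24
pair = torch.randn(batch_size, 2 * channels, h, w)  # two images along dim 1

# split the channel axis into (image, channel), then pool the pixels of both
# images per channel: [B, C, 2*H*W]
grouped = (pair.view(batch_size, 2, channels, h * w)
               .transpose(1, 2).contiguous()
               .view(batch_size, channels, -1))
joint_mean = grouped.mean(2)  # [B, C]: one statistic per channel for the pair
joint_std = grouped.std(2)

# broadcast the per-channel statistics back over both images
mean_full = torch.cat([joint_mean, joint_mean], 1).view(batch_size, 2 * channels, 1, 1)
std_full = torch.cat([joint_std, joint_std], 1).view(batch_size, 2 * channels, 1, 1)
pair_norm = (pair - mean_full) / std_full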
st101049
I am fairly new to using RNNs and pytorch. I am studying the Generating Names with a Character-Level RNN tutorial, and I was wondering how I should change the model to use an LSTMCell. The model in the tutorial is defined as follows:

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax(dim=1)

I am thinking of replacing self.i2h with an LSTMCell layer:

self.lstm = nn.LSTMCell(n_categories + input_size + hidden_size, hidden_size)

I am not sure whether self.i2o should then be kept or become something like:

self.h2o = nn.Linear(hidden_size, output_size)
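For reference, a minimal sketch of how an nn.LSTMCell is driven (sizes are illustrative, and this is not the tutorial’s code): the cell carries its own gating, so a single hidden-to-output linear layer like the proposed self.h2o is the usual companion.

import torch
import torch.nn as nn
import torch.nn.functional as F

batch, input_size, hidden_size, output_size = 4, 16, 32, 10
cell = nn.LSTMCell(input_size, hidden_size)
h2o = nn.Linear(hidden_size, output_size)

x = torch.randn(7, batch, input_size)   # a sequence of 7 time steps
hx = torch.zeros(batch, hidden_size)    # initial hidden state
cx = torch.zeros(batch, hidden_size)    # initial cell state
for step in x:
    hx, cx = cell(step, (hx, cx))       # one step; LSTMCell returns (h, c)
scores = F.log_softmax(h2o(hx), dim=1)  # log-probabilities from the last hidden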
st101050
I am installing PyTorch using conda as follows: conda install pytorch torchvision -c pytorch. I have CUDA 9.0 with cuDNN 7.0.5 installed into it. I also have an unlinked CUDA 9.2 installation with cuDNN 7.1.4. Somehow, when I install PyTorch 0.4.1, it still picks the cudnn 7.1 variant and downloads the following package:

pytorch: 0.4.1-py37_cuda9.0.176_cudnn7.1.2_1 pytorch

So it still picks up the 7.1 cudnn version and doesn’t select the 7.0 one. I know for a fact that there is a cudnn 7.0 build of PyTorch 0.4.1, since I was able to install it on another PC. Is there a way to force the cudnn version somehow during installation?
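For what it’s worth, conda can pin an exact build with the pkg=version=build syntax (the build string below is only an example; check which builds actually exist in the channel first):

# list the available builds of pytorch in the channel
conda search pytorch -c pytorch

# pin a specific build string, assuming such a build exists
conda install pytorch=0.4.1=py37_cuda9.0.176_cudnn7.0.5_1 -c pytorch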
st101051
temp = torch.empty(x_size)
temp.to(device)
print('device temp', temp.device)

I ran the above code on a GPU server, where device is cuda. But it gives me a CPU tensor. Why?
st101052
You have to assign the tensor: temp = temp.to(device). nn.Modules will be pushed inplace to the specified device, while tensors need the assignment.
st101053
This is more a general question. I observed that PyTorch was taking almost a GB of space under site-packages. Is this normal? It struck me as a little surprising as tensorflow was only about 300 MB.
st101054
I wanted to ask at caffe2’s github “issues”, which is now part of pytorch; it suggested I ask my question here. I’m reading caffe2’s tutorial code, especially CIFAR-10: Part 1&2. It has some operations related to the object created by TensorProtos(), such as protos.add(), protos.extend(), and reshape(). I suspect not all of these operations belong to the TensorProtos object, but it seems I can’t find any references for them. What are their roles? What are they doing? How can I use them in my own code?
st101055
We are trying to get caffe2 compiled on AIX. We were able to compile it successfully and run a few caffe2 code samples. But an RNN sample script fails with the exception shown below. Any pointers on the cause of this would be helpful.

python char_rnn.py --train_data input.txt
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named caffe2_pybind11_state_hip
Input has 65 characters. Total input size: 1115394
DEBUG:char_rnn:Start training
Entering interactive debugger. Type "bt" to print the full stacktrace. Type "help" to see command listing.
Exception when creating gradient for [LSTMUnit]: [enforce fail at pybind_state.cc:523] caffe2::GradientRegistry()->Has(def.type()). Op:
input: "LSTM/hidden_t_prev"
input: "LSTM/cell_t_prev"
input: "LSTM/gates_t"
input: "seq_lengths"
input: "timestep"
output: "LSTM/hidden_state"
output: "LSTM/cell_state"
name: ""
type: "LSTMUnit"
arg { name: "sequence_lengths" i: 1 }
arg { name: "drop_states" i: 0 }
arg { name: "forget_bias" f: 0.0 }
device_option { device_type: 0 cuda_gpu_id: 0 }
/usr/local/lib/python2.7/site-packages/caffe2/python/core.py(1108)GetGradientForOp()
-> format(op.type, e, str(op))
(Pdb)
st101056
Just wondering if this behavior when indexing tensors, as opposed to lists, is intended. Lists:

a = [1, 1, 1]
b = a[1]
b -= 1

a will still be [1, 1, 1] and b will be 0. On the other hand, for tensors:

a = torch.ones(3)
b = a[1]
b -= 1

a will be tensor([ 1., 0., 1.]) and b will be tensor(0.). Is this the intended behavior? If not, how can I modify it such that I copy the tensor as a brand new variable? For a python list, indexing is enough to copy it into a new variable.
st101057
Solved by ptrblck in post #2 It’s intended behavior. If you want to get a copy, you could use .clone(): a = torch.ones(3) b = a[1].clone() b -= 1
st101058
It’s intended behavior. If you want to get a copy, you could use .clone():

a = torch.ones(3)
b = a[1].clone()
b -= 1
st101059
Just curious: in resnet, they did not use clone(). Wouldn’t the subsequent operations modify residual after passing x through the layers as well? Shouldn’t residual be the original input x?
st101060
residual and x won’t be modified by the operations, as new tensors are being created. Have a look at this small example:

lin = nn.Linear(10, 2)
x = torch.ones(1, 10)
residual = x
output = lin(x)

Both residual and x still have the same shape and values.
st101061
Hello guys, I am currently using this pre-trained code example to train on my own dataset. My dataset classes are postures of humans, like sitting and standing up. I use the resnet18 model for this. As a limitation, I have very few training images (15 at most) to test with (will add more, obviously), and I changed minor parts of main.py, like changing scale() to resize(), because they were deprecated. At run time it warns about Prec@1/Prec@5, but the code runs anyway. I used some of the same images in “val”, or completely irrelevant ones, to observe the loss/acc change. Edit: the images are 224x224 px. This is what I got (with completely irrelevant pics, like birds, in val):

Epoch: [89][0/1] Time 2.153 (2.153) Data 2.114 (2.114) Loss 0.6270 (0.6270) Prec@1 72.727 (72.727) Prec@5 100.000 (100.000)
Test: [0/1] Time 1.863 (1.863) Loss 0.7864 (0.7864) Prec@1 50.000 (50.000) Prec@5 100.000 (100.000)
Prec@1 50.000 Prec@5 100.000

Question is: if I add 1000+ relevant imgs to my “train” and some amount to “val”, will it have different and proper results? Therefore, can I use it? Or should I scrap it and write my own, as it is wrong/deprecated?
st101062
What is the error message stating that “Prec1,5” is not working? How many classes do you currently have?
st101063
=> creating model 'resnet18'
main.py:171: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
  losses.update(loss.data[0], input.size(0))
main.py:172: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
  top1.update(prec1[0], input.size(0))
main.py:173: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
  top5.update(prec5[0], input.size(0))

Currently I have 2 classes. I haven’t changed the code, as can be seen from the link.
st101064
That’s just a warning for indexing a 0-dim tensor. Just use .item() instead, as suggested in the warning:

losses.update(loss.item(), input.size(0))
top1.update(prec1.item(), input.size(0))
top5.update(prec5.item(), input.size(0))
st101065
Thanks for the fix. However, the reason for that part was to show whether pytorch’s own examples need to be updated on their github page. As I said, it still works despite those warnings. Maybe my question was phrased wrong: I tried to see how the test results change based on the “val” folder inputs. I put in irrelevant photos, like cats, to see if the algorithm fails to classify them. In the end I got this after the 89th epoch:

Epoch: [89][0/1] Time 2.153 (2.153) Data 2.114 (2.114) Loss 0.6270 (0.6270) Prec@1 72.727 (72.727) Prec@5 100.000 (100.000)
Test: [0/1] Time 1.863 (1.863) Loss 0.7864 (0.7864) Prec@1 50.000 (50.000) Prec@5 100.000 (100.000)
Prec@1 50.000 Prec@5 100.000

The question was: will the results in Prec@1 and Prec@5 change if I add 1000+ images? (Different inputs in the val folder made no change.)
st101066
I’m seeing a consistent change in results when I make a trivial change to my code. Original version:

class LeNet(nn.Module):
    def __init__(self, nfilters=32, nclasses=10, linear=128):
        super(LeNet, self).__init__()
        self.linear1 = nn.Linear(nfilters*5*5, linear)
        self.linear2 = nn.Linear(linear, nclasses)
        self.dropout = nn.Dropout()
        self.act = nn.ReLU(inplace=True)
        self.batch_norm = nn.BatchNorm1d(linear)
        self.first_layers = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=nfilters, kernel_size=5),
            nn.BatchNorm2d(nfilters),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            self.act,
            nn.Conv2d(in_channels=nfilters, out_channels=nfilters, kernel_size=5),
            nn.BatchNorm2d(nfilters),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            self.act,
        )

    def forward(self, x):
        x = self.first_layers(x)
        x = x.view(x.size(0), -1)
        x - self.dropout(x)
        x = self.linear1(x)
        x = self.batch_norm(x)
        x = self.act(x)
        x = self.dropout(x)
        x = self.linear2(x)
        return x

Modified version:

class LeNet(nn.Module):
    def __init__(self, nfilters=32, nclasses=10, linear=128):
        super(LeNet, self).__init__()
        self.linear1 = nn.Linear(nfilters*5*5, linear)
        self.linear2 = nn.Linear(linear, nclasses)
        self.dropout = nn.Dropout()
        self.act = nn.ReLU(inplace=True)
        self.batch_norm = nn.BatchNorm1d(linear)
        self.first_layers = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=nfilters, kernel_size=5),
            nn.BatchNorm2d(nfilters),
            nn.MaxPool2d(kernel_size=2, stride=2),
            self.act,
            nn.Conv2d(in_channels=nfilters, out_channels=nfilters, kernel_size=5),
            nn.BatchNorm2d(nfilters),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0),
            self.act,
        )
        self.last_layers = nn.Sequential(
            self.dropout,
            self.linear1,
            self.batch_norm,
            self.act,
            self.dropout,
            self.linear2,
        )

    def forward(self, x):
        x = self.first_layers(x)
        x = x.view(x.size(0), -1)
        x = self.last_layers(x)
        return x

As you can see, the only change is wrapping the last layers into nn.Sequential. Here are the results.

First version:

Epoch 0 Train: Loss 0.29 Accuracy 93.96 Test: Loss 0.05 Accuracy 98.73
Epoch 1 Train: Loss 0.10 Accuracy 97.05 Test: Loss 0.03 Accuracy 98.90
Epoch 2 Train: Loss 0.08 Accuracy 97.85 Test: Loss 0.03 Accuracy 98.98

Second version (using self.last_layers):

Epoch 0 Train: Loss 0.41 Accuracy 90.10 Test: Loss 0.06 Accuracy 98.46
Epoch 1 Train: Loss 0.16 Accuracy 95.42 Test: Loss 0.04 Accuracy 98.69
Epoch 2 Train: Loss 0.12 Accuracy 96.28 Test: Loss 0.04 Accuracy 98.84

As you can see, the first version trains faster, and even though this might appear to be a slight difference, it’s consistent, while a repeated run of the same code produces almost identical results. I tracked this down to the dropout layers: if I comment out the dropout layers in self.last_layers and forward, the difference disappears.
st101067
Solved by ptrblck in post #2.
st101068
I think you have a typo in your first model, as you are not using the Dropout layer in forward:

x - self.dropout(x)

This should probably be an assignment instead of a subtraction. You will get exactly the same output if you fix this typo and seed all operations accordingly:

torch.manual_seed(2809)
modelA = LeNetA()
torch.manual_seed(2809)
modelB = LeNetB()

x = torch.randn(2, 1, 32, 32)

torch.manual_seed(2809)
outputA = modelA(x)
torch.manual_seed(2809)
outputB = modelB(x)
st101069
I think it’s more a general Python/IDE issue. While pylint will catch those lines, other tools don’t seem to care about the missing assignment.
st101070
I run the same input through a set of sequential modules and get different results.

net = nn.Sequential(
    nn.Conv2d(50, 30, kernel_size=1, padding=1, bias=False),
    nn.BatchNorm2d(30, momentum=.95),
    nn.ReLU(inplace=True),
    nn.Dropout(0.1),
    nn.Conv2d(30, 19, kernel_size=1))

x = torch.rand(1, 50, 12, 12)
y = net(Variable(x))

modules = [m for m in net.children()]
w = Variable(x)
for m in modules:
    w = m(w)

torch.equal(w.data, y.data)  # False

Any ideas where it is going wrong? I am using Python 2.7 and torch version 0.2.0_3.
st101071
Solved by rohun in post #2.
st101072
It was the Dropout in the middle. It works fine if the Dropout is left out, i.e. if the net is the following:

net = nn.Sequential(
    nn.Conv2d(50, 30, kernel_size=1, padding=1, bias=False),
    nn.BatchNorm2d(30, momentum=.95),
    nn.ReLU(inplace=True),
    nn.Conv2d(30, 19, kernel_size=1))

Need to close this thread.
st101073
Wait, how is this closed? Clearly this is a bug, or very unexpected undocumented behavior. Can any devs comment on this?
st101074
nn.Dropout uses its probability p to drop activations randomly, so it’s expected behavior to get different results using this layer. Have a look at this small example, which results in a new drop mask on every run:

drop = nn.Dropout(p=0.5)
x = torch.randn(1, 10)
for _ in range(10):
    output = drop(x)
    print(output)
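Relatedly (a side note, not from the thread): dropout only randomizes in training mode, so switching the module to eval mode makes repeated forward passes deterministic.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
drop.eval()                           # dropout becomes the identity in eval mode
x = torch.randn(1, 10)
print(torch.equal(drop(x), drop(x)))  # True: no randomness in eval mode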
st101075
Oh, it looks like I misunderstood this. I thought this was similar to the issue I’m having: Different results if using nn.Sequential wrapper
st101076
Hi, suppose my vocabulary size is 10,000. At some point, my model emits scores for 3 words that are, let’s say, at vocabulary indices 22, 1576, and 9065 respectively. The variable ‘scores’ has a dimension of (1, 3). How can I obtain a log_softmax over the vocabulary size that can be used for NLLLoss? I tried something like the following, but it seems that the gradient is not backpropagating:

word_scores = model.foo()
index_to_update = model.bar()
i = torch.LongTensor(index_to_update)
v = word_scores
word_attn_energy = torch.sparse.FloatTensor(i.t(), v, torch.Size([1, OUTPUT_DIM])).to_dense()
word_attn_energy.requires_grad = True
log_prob = F.log_softmax(word_attn_energy, dim=1)

In short, I am looking for something similar to TensorFlow’s tf.nn.sparse_softmax_cross_entropy_with_logits.
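One differentiable alternative, as a minimal sketch (the sizes and the -inf fill are illustrative choices, not the thread’s answer): scatter the emitted scores into a dense logit tensor filled with -inf, so unemitted words get zero probability while gradients still flow into the scattered scores.

import torch
import torch.nn.functional as F

vocab_size = 10000
word_scores = torch.randn(1, 3, requires_grad=True)  # scores for 3 words
idx = torch.tensor([[22, 1576, 9065]])               # their vocabulary indices

full = torch.full((1, vocab_size), float('-inf'))
full = full.scatter(1, idx, word_scores)             # out-of-place: keeps the graph
log_prob = F.log_softmax(full, dim=1)

target = torch.tensor([1576])
loss = F.nll_loss(log_prob, target)
loss.backward()                                      # word_scores.grad is populated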
st101077
Is there a faster way than accumulating a 2d list (the program is given one row at a time) and then converting the whole list to a PyTorch Tensor at once? Is using torch.cat or torch.stack (converting each row to a Tensor first) more efficient? This is to be input into a neural network, so the rows should be adjacent in memory.
st101078
I don’t expect torch.cat / torch.stack to be more efficient than accumulating into a 2d list and then converting it to a tensor. You could try it and see how it goes, though.
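For illustration, both patterns produce a contiguous [N, D] tensor (the sizes here are arbitrary); timing them with your own row size is the quickest way to decide.

import torch

rows = [[float(j) for j in range(64)] for _ in range(1000)]  # rows arrive one at a time

t1 = torch.tensor(rows)                            # convert the 2d list once at the end
t2 = torch.stack([torch.tensor(r) for r in rows])  # per-row tensors, stacked once

print(t1.shape, t1.is_contiguous())  # torch.Size([1000, 64]) True
print(torch.equal(t1, t2))           # True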
st101079
While training a model, when I use multiple GPUs I get this error:

loss.backward()
  File "/home/ashwin/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 120, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/ashwin/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 81, in backward
    variables, grad_variables, retain_graph, create_graph)
RuntimeError: arguments are located on different GPUs at /home/ashwin/pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:231

I just use model.cuda(), and similarly while loading variables I use Variable(<param_name>).cuda(). Training on a single GPU works fine, but on multiple GPUs it gives this error. Any help is appreciated. I can also see torch.cuda.device_count() = 4 and torch.cuda.current_device() = 0.
st101080
I am not sure where exactly this error happens, since it throws the error after loss.backward(). Let me show how I calculate the loss:

inputs = Variable(inputs).cuda()
net(inputs, truth_boxes, truth_labels, truth_instances)
loss = net.loss(inputs, truth_boxes, truth_labels, truth_instances)

I do not use nn.DataParallel to parallelize the whole model; instead, I parallelize each module component in the model using the data_parallel function from nn.parallel.data_parallel, because some components are C wrappers which throw errors when I use nn.DataParallel on the model.
st101081
Could you re-run the code with CUDA_LAUNCH_BLOCKING=1 python script.py? Using this, you should find the exact location where the error was thrown.
st101082
Maybe it hangs because of the blocking statement and the use of data parallel. Sorry if that’s the case. Could you post a small code snippet reproducing this error? Currently I’m clueless as to how to debug your error.
st101083
By setting CUDA_VISIBLE_DEVICES=0 it works, but it uses only 1 GPU (obviously). If I set CUDA_VISIBLE_DEVICES=0,1,2,3, it throws the error mentioned in the post.
st101084
I think this is the snippet that is giving the error. If I don’t use this snippet, the model works on multiple GPUs.

class RcnnMultiHead(nn.Module):
    def __init__(self, scales, cfg, in_channels):
        super(RcnnMultiHead, self).__init__()
        self.scales = scales
        self.num_classes = cfg.num_classes
        self.crop_size = cfg.rcnn_crop_size
        self.fc1s = nn.ModuleList()
        self.fc2s = nn.ModuleList()
        self.logits = nn.ModuleList()
        self.deltas = nn.ModuleList()
        for i in range(self.scales):
            self.fc1s.append(nn.Linear(in_channels * cfg.rcnn_scales[i] * cfg.rcnn_scales[i], 784))
            self.fc2s.append(nn.Linear(784, 784))
            self.logits.append(nn.Linear(784, self.num_classes))
            self.deltas.append(nn.Linear(784, self.num_classes * 4))

    def forward(self, crops):
        logits_flat = []
        deltas_flat = []
        for i in range(self.scales):
            crop = crops[i]
            x = crop.view(crop.size(0), -1)
            x = F.relu(self.fc1s[i](x), inplace=True)
            x = F.relu(self.fc2s[i](x), inplace=True)
            x = F.dropout(x, 0.5, training=self.training)
            logit = self.logits[i](x)
            delta = self.deltas[i](x)
            logits_flat.append(logit)
            deltas_flat.append(delta)
        logits_flat = torch.cat(logits_flat, 0)
        deltas_flat = torch.cat(deltas_flat, 0)
        return logits_flat, deltas_flat

class RcnnFpn(nn.Module):
    def __init__(self, in_dim):
        super(RcnnFpn, self).__init__()
        self.in_dim = in_dim
        self.c1 = nn.Conv2d(256, 384, kernel_size=3, stride=2, padding=1)
        self.p1 = nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0)
        self.c1_2 = LateralBlock(384, 256, self.in_dim)
        self.c2 = nn.Conv2d(384, 512, kernel_size=3, stride=2, padding=1)
        self.c2_2 = LateralBlock(256, self.in_dim, self.in_dim)

    def forward(self, x):
        c1_out = F.leaky_relu(self.c1(x))
        c2_out = F.leaky_relu(self.c2(c1_out))
        p1 = self.p1(c2_out)
        p2 = self.c1_2(c1_out, p1)
        p3 = self.c2_2(x, p2)
        features = [p1, p2, p3]
        return features

class Model(nn.Module):
    def __init__(self, cfg):
        super(Model, self).__init__()
        ...
        self.rcnn_crop = CropRoi(cfg, cfg.rcnn_crop_size)
        self.rcnn_head = RcnnMultiHead(3, cfg, crop_channels)
        self.rcnn_fpn = RcnnFpn(in_dim=256)

    def forward(self, x):
        ...
        rcnn_crops = self.rcnn_crop(features, self.rpn_proposals)
        rcnn_features = data_parallel(self.rcnn_fpn, rcnn_crops)
        self.rcnn_logits, self.rcnn_deltas = data_parallel(self.rcnn_head, rcnn_features)

self.rcnn_fpn and self.rcnn_head are called via the data_parallel function from nn.parallel.data_parallel, and this is causing the error.
st101085
I cannot run the code, since some classes are undefined. Could you post a placeholder for LateralBlock and CropRoi? Also could you provide the settings for cfg?
st101086
I believe it is Heng’s implementation of mask-rcnn for Kaggle DS bowl 2018. I met exactly the same error but couldn’t find the root cause. My situation is more weird as the loss.backward() fails occasionally. My assumption is that the gather operation in data_parallel was not performed correctly and some data are left on other GPUs that leads to such error. Any suggestions on how to debug a complicated network and what might be the cause are appreciated.
st101087
I have the same problem with this code (pytorch built from source):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.conv = nn.Conv2d(3, 3, 3, 1, 0)

    def forward(self, x):
        size = [int(s * 0.5) for s in x.shape[2:]]
        a = self.conv(x)
        b = F.upsample(x, size=size, mode='bilinear', align_corners=True)
        b = self.conv(b)
        c = F.upsample(b, size=a.shape[2:], mode='bilinear', align_corners=True)
        return a, b, c

data = torch.rand(5, 3, 32, 32).cuda()
data = Variable(data)
model = MyModule()
model = nn.DataParallel(model)
model.cuda()
outputs = model(data)

loss = 0
target_a = np.random.randint(0, 3, size=(5, 30, 30))
target_a = torch.from_numpy(target_a).long().cuda()
target_a = Variable(target_a, requires_grad=False)
loss += F.nll_loss(F.log_softmax(outputs[0], dim=1), target_a, ignore_index=-1)

target_b = np.random.randint(0, 3, size=(5, 14, 14))
target_b = torch.from_numpy(target_b).long().cuda()
target_b = Variable(target_b, requires_grad=False)
loss += F.nll_loss(F.log_softmax(outputs[1], dim=1), target_b, ignore_index=-1)

target_c = np.random.randint(0, 3, size=(5, 30, 30))
target_c = torch.from_numpy(target_c).long().cuda()
target_c = Variable(target_c, requires_grad=False)
loss += F.nll_loss(F.log_softmax(outputs[2], dim=1), target_c, ignore_index=-1)

loss.backward()

Output:

Traceback (most recent call last):
  File "test.py", line 68, in <module>
    loss.backward()
  File "/opt/conda/envs/pytorch-py36/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/envs/pytorch-py36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: arguments are located on different GPUs at /root/pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:233

Any idea? @ptrblck
st101088
I tried your code on my machine and it’s working. What PyTorch version are you using?
st101089
I use the source version (0.4.0a0+e46043a) and multiple GPUs (with one it works fine…)
st101090
Ok, I tested it with 4 GPUs on a server running 0.3.1_post2 and it worked fine. I’ll try to compile your version and run it again.
st101091
I compiled 0.4.0a0+e46043a and got the same error. The code runs fine on the current master (0.5.0a0+a4dbd37). Probably it’s a known bug that was fixed already. Could you try to update PyTorch?
st101092
Hi ptrblck, I also came across this problem. Do you have any easy way to update pytorch from 0.4.0a0+e46043a to 0.5.0a0+a4dbd37? Every time I try to update pytorch, it costs me a lot of time. Thank you.
st101093
Hi, my computer is offline and cannot access the internet. Thus I tried to install from the released zip file, but when I uncompress the zip file, the subdirectories of the third_party directory are empty. How can I install pytorch from source code on a computer without internet access? Thanks.
st101094
Hi, You would need to get all the submodules from the git repo. I am not sure what is the best way to do that.
st101095
Thanks @albanD. Now I have all the submodules, but when I install using “python setup.py install”, it gives me the following errors:

$ python setup.py install
tools/build_pytorch_libs.sh: line 2: $'\r': command not found
tools/build_pytorch_libs.sh: line 10: $'\r': command not found
tools/build_pytorch_libs.sh: line 11: set: -: invalid option
set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
tools/build_pytorch_libs.sh: line 12: $'\r': command not found
tools/build_pytorch_libs.sh: line 112: syntax error near unexpected token `elif'
tools/build_pytorch_libs.sh: line 112: `elif [[ "$REL_WITH_DEB_INFO" ]]; then'

Any ideas?
st101096
It looks like the bash script is not being read properly. What machine are you using? Did you use a Windows machine at some point to transfer the files? It looks like the end-of-line symbols are wrong.
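For reference, a couple of standard fixes for CRLF line endings (assuming the dos2unix tool is available; git’s autocrlf setting is the usual prevention when cloning on Windows):

# convert CRLF to LF in the offending scripts
find . -name '*.sh' -exec dos2unix {} +

# or prevent it up front: don't let git rewrite line endings on checkout
git config --global core.autocrlf input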
st101097
Yes, I first cloned the code with git on my Windows PC, then copied the source code to my Ubuntu server. Now I have fixed it. Thanks.
st101098
Hi @albanD, another strange thing is that pytorch cannot find the GPU on my server; before the upgrade it was OK. When I call pytorch, it says that “pytorch was compiled without cuDNN support”. Do you know how to fix it? Thanks.
st101099
Did you have cudnn installed on the machine while installing? During the installation there are prints of the detected cuda version and the detected cudnn version. If your cudnn is in the same place as cuda, then it should work fine. If you don’t want to use cudnn and don’t have it installed, I guess you can set torch.backends.cudnn.enabled=False.