
CUDA_ERROR_OUT_OF_MEMORY in TensorFlow

When I started training a neural network, it hit CUDA_ERROR_OUT_OF_MEMORY, but the training went on without error. Because I wanted the GPU memory to be allocated only as it is actually needed, I set gpu_options.allow_growth = True.
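For reference, this is a minimal sketch of the usual TensorFlow 1.x pattern for enabling that option (the rest of my training code is omitted):

import tensorflow as tf  # TensorFlow 1.x API

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of grabbing it all up front
sess = tf.Session(config=config)        # the actual training code then runs inside this session

The logs are as follows: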

I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device:0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
E tensorflow/stream_executor/cuda/cuda_driver.cc:965] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
Iter 20, Minibatch Loss= 40491.636719
...

Running the nvidia-smi command gives:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.27                 Driver Version: 367.27                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:01:00.0     Off |                  N/A |
| 40%   61C    P2    46W / 180W |   8107MiB /  8111MiB |     96%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:02:00.0     Off |                  N/A |
|  0%   40C    P0    40W / 180W |      0MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     22932    C   python                                        8105MiB |
+-----------------------------------------------------------------------------+

After I commented out gpu_options.allow_growth = True and trained the network again, everything was normal and there was no CUDA_ERROR_OUT_OF_MEMORY. Running the nvidia-smi command again gives:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.27                 Driver Version: 367.27                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:01:00.0     Off |                  N/A |
| 40%   61C    P2    46W / 180W |   7793MiB /  8111MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 1080    Off  | 0000:02:00.0     Off |                  N/A |
|  0%   40C    P0    40W / 180W |      0MiB /  8113MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     22932    C   python                                        7791MiB |
+-----------------------------------------------------------------------------+

I have two questions about this. Why did CUDA_ERROR_OUT_OF_MEMORY appear even though the training proceeded normally? And why did the memory usage become smaller after I commented out allow_growth = True?

Question from: https://stackoverflow.com/questions/39465503/cuda-error-out-of-memory-in-tensorflow

1 Reply

In case it's still relevant to someone: I encountered this issue when trying to run Keras/TensorFlow a second time, after a first run had been aborted. It seems the GPU memory was still allocated and therefore could not be allocated again. It was solved by manually ending all Python processes that use the GPU, or alternatively by closing the existing terminal and running the program again in a new terminal window.
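For example (a sketch only; the PID shown is the one from the nvidia-smi output above, and yours will differ):

nvidia-smi       # look up the PID of the stale python process in the "Processes" table
kill -9 22932    # replace 22932 with the PID reported on your machine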

