How to dedicate your laptop GPU to TensorFlow only, on Ubuntu 18.04. | by Manu NALEPA | Towards Data Science
Setting tensorflow.keras.mixed_precision.Policy('mixed_float16') uses up almost all GPU memory - Stack Overflow
Optimize TensorFlow GPU performance with the TensorFlow Profiler | TensorFlow Core
Optimize TensorFlow performance using the Profiler | TensorFlow Core
Solved] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR | ProgrammerAH
python - How to run tensorflow inference for multiple models on GPU in parallel? - Stack Overflow
Reducing and Profiling GPU Memory Usage in Keras with TensorFlow Backend | Michael Blogs Code
Optimize TensorFlow performance using the Profiler | TensorFlow Core
Optimize TensorFlow GPU performance with the TensorFlow Profiler | TensorFlow Core
tensorflow ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [256,256,15,15] and type float on error (insufficient memory error) - Code World
Revving Up Machine-Learning Inference | Electronic Design
Pushing the limits of GPU performance with XLA — The TensorFlow Blog
Optimize TensorFlow GPU performance with the TensorFlow Profiler | TensorFlow Core
The transformational role of GPU computing and deep learning in drug discovery | Nature Machine Intelligence
Memory Hygiene With TensorFlow During Model Training and Deployment for Inference | by Tanveer Khan | IBM Data Science in Practice | Medium
Using allow_growth memory option in Tensorflow and Keras | by Kobkrit Viriyayudhakorn | Kobkrit
Building an Accelerated Data Science Ecosystem: RAPIDS Hits Two Years | NVIDIA Technical Blog
Determining GPU Memory for Machine Learning Applications on VMware vSphere with Tanzu | VMware