[TOC]

## Overview

PyTorch is an open-source deep learning framework originally developed and maintained by Facebook's AI Research lab (FAIR). It provides a powerful platform for building and training neural network models and is widely used for research, development, and deployment of deep learning systems. Its main features include:

* Dynamic computation graphs: PyTorch builds the computation graph on the fly (define-by-run), so developers can easily construct and modify models. Compared with the static-graph approach of frameworks such as TensorFlow, this is more intuitive and easier to debug.
* Automatic differentiation: PyTorch's built-in autograd engine computes gradients for you, which is essential when training neural networks with gradient-based optimization (see the sketch after this list).
* Rich modules and tools: PyTorch ships with deep learning building blocks such as neural network layers, optimizers, and loss functions. It also supports GPU acceleration, allowing high-performance computation on both CPU and GPU.
* Strong community support: PyTorch has a large user and developer community, with plenty of tutorials, example code, and third-party extensions that help speed up deep learning projects.
* Deep learning research: thanks to its flexibility and ease of use, PyTorch is very popular among researchers, who can quickly prototype new ideas and algorithms.
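To make the dynamic-graph and autograd points concrete, here is a minimal sketch, assuming a standard PyTorch installation. The tensors and the toy loss are illustrative only, not part of any particular model.

```python
import torch

# Tensors created with requires_grad=True become leaves of the dynamic graph.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

# The graph is recorded as the operations execute (define-by-run),
# so ordinary Python control flow can shape the computation.
y = (w * x).sum()
if y > 0:
    loss = y ** 2
else:
    loss = -y

# Autograd walks the recorded graph backwards and fills in .grad.
loss.backward()
print(x.grad)  # d(loss)/dx
print(w.grad)  # d(loss)/dw

# GPU acceleration: place tensors on CUDA if a device is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
z = torch.randn(2, 3, device=device)
print(z.device)
```

Because the graph is recorded as the Python code runs, ordinary control flow (the `if` above) simply changes which operations end up in the graph for that particular pass.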
## Install NVIDIA Dependencies

**Download CUDA**

https://developer.nvidia.com/cuda-downloads

Test the CUDA installation:

```
> nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
```

**Download cuDNN**

https://developer.nvidia.com/rdp/cudnn-download

After downloading, extract the archive into `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2`

> Replace v12.2 with the version number of the CUDA toolkit you downloaded.

To verify the installation, run `deviceQuery.exe` and `bandwidthTest.exe` from `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\extras\demo_suite`.

<details>
<summary>deviceQuery.exe</summary>

```
deviceQuery.exe Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 1080"
  CUDA Driver Version / Runtime Version          12.2 / 12.2
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8192 MBytes (8589803520 bytes)
  (20) Multiprocessors, (128) CUDA Cores/MP:     2560 CUDA Cores
  GPU Max Clock rate:                            1835 MHz (1.84 GHz)
  Memory Clock rate:                             5005 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               zu bytes
  Total amount of shared memory per block:       zu bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          zu bytes
  Texture alignment:                             zu bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.2, CUDA Runtime Version = 12.2, NumDevs = 1, Device0 = NVIDIA GeForce GTX 1080
Result = PASS
```

</details>

<details>
<summary>bandwidthTest.exe</summary>

```
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: NVIDIA GeForce GTX 1080
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11895.2

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     12285.0

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     205529.6

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
```

</details>
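Once the toolkit checks pass, it is worth confirming that PyTorch itself can see the GPU. The following is a minimal sketch, assuming a CUDA-enabled PyTorch build has already been installed (installing PyTorch is not covered in this section); it uses only standard `torch` calls, and the version strings and device name will differ on your machine.

```python
import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version this PyTorch build targets
print(torch.backends.cudnn.version())  # cuDNN version bundled with PyTorch
print(torch.cuda.is_available())       # True if the driver/runtime are usable

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce GTX 1080"
    # Small matrix multiplication on the GPU as a smoke test.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print(c.shape, c.device)
```

If `torch.cuda.is_available()` returns `False` even though `deviceQuery.exe` reports PASS, a frequent cause is a CPU-only PyTorch wheel or a mismatch between the installed driver and the CUDA version the wheel was built against.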