Strange result when computing memory bandwidth from nvprof output

Stack Overflow user

Asked on 2017-09-05 22:11:18
1 answer · 337 views · 0 followers · score 1

How do I calculate GPU memory bandwidth, given:

  1. The data sample size (in GB).
  2. The kernel execution time (from nvprof output).

GPU: GTX 1050 Ti

CUDA: 8.0

OS: Windows 10

IDE: Visual Studio 2015

Normally I would use this formula: bandwidth [GB/s] = data_size [GB] / average_time [s]

However, when I apply the equation above to the get_mem_kernel() kernel, I get the wrong result: 441.93 [GB/s]

I think this result is wrong, because the technical specification of the GTX 1050 Ti lists the global memory bandwidth as 112 [GB/s].

Where am I making a mistake, or what else am I not understanding?
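
For reference, the figure above follows directly from the formula, using the 100,000,000-byte buffer and the nvprof average for get_mem_kernel shown below (226.28 us):

100,000,000 B / 226.28e-6 s ≈ 4.4193e11 B/s ≈ 441.93 GB/s

That is almost four times the card's rated bandwidth, so either the timing or the amount of data actually transferred must be off.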

Sample code:

// cpp libs:
#include <iostream>
#include <sstream>
#include <fstream>
#include <iomanip>
#include <stdexcept>

// cuda libs:
#include <cuda_runtime.h>
#include <device_launch_parameters.h>

#define ERROR_CHECK(CHECK_) if (CHECK_ != cudaError_t::cudaSuccess) { std::cout << "cuda error" << std::endl; throw std::runtime_error("cuda error"); }

using data_type = double;

template <typename T> constexpr __forceinline__
T div_s(T dividend, T divisor)
{
    using P = double;
    return static_cast <T> (static_cast <P> (dividend + divisor - 1) / static_cast <P> (divisor));
}

__global__
void set_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size)
    {
        in_data[idx] = static_cast <data_type> (idx);
    }
}

__global__
void get_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
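    // NOTE: val is written below but never used afterwards, so the
    // compiler is free to drop the load entirely (see the accepted answer).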
    data_type val = 0;
    if (idx < size)
    {
        val = in_data[idx];
    }
}

struct quit_program
{
public:
    ~quit_program()
    {
        try
        {
            ERROR_CHECK(cudaDeviceReset());
        }
        catch (...) {}
    }
} quit;

int main()
{
    unsigned int size = 12500000; // 100 MB of doubles (12,500,000 * 8 bytes)
    size_t       byte = size * sizeof(data_type);

    dim3 threads (256, 1, 1);
    dim3 blocks  (div_s(size, threads.x), 1, 1);

    std::cout << size << std::endl;
    std::cout << byte << std::endl;
    std::cout << std::endl;

    std::cout << threads.x << std::endl;
    std::cout << blocks.x  << std::endl;
    std::cout << std::endl;

    // data:
    data_type * d_data = nullptr;
    ERROR_CHECK(cudaMalloc(&d_data, byte));

    for (int i = 0; i < 20000; i++)
    {
        set_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());

        get_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());
    }

    // Exit:
    ERROR_CHECK(cudaFree(d_data));
    ERROR_CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}

nvprof results:

D:\Dev\visual_studio\nevada_test_site\x64\Release>nvprof ./cuda_test.exe
12500000
100000000

256
48829

==10508== NVPROF is profiling process 10508, command: ./cuda_test.exe
==10508== Warning: Unified Memory Profiling is not supported on the current configuration because a pair of devices without peer-to-peer support is detected on this multi-GPU setup. When peer mappings are not available, system falls back to using zero-copy memory. It can cause kernels, which access unified memory, to run slower. More details can be found at: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-managed-memory
==10508== Profiling application: ./cuda_test.exe
==10508== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 81.12%  19.4508s     20000  972.54us  971.22us  978.32us  set_mem_kernel(unsigned int, double*)
 18.88%  4.52568s     20000  226.28us  224.45us  271.14us  get_mem_kernel(unsigned int, double*)

==10508== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 97.53%  26.8907s     40000  672.27us  247.98us  1.7566ms  cudaDeviceSynchronize
  1.61%  443.32ms     40000  11.082us  5.8340us  183.43us  cudaLaunch
  0.51%  141.10ms         1  141.10ms  141.10ms  141.10ms  cudaMalloc
  0.16%  43.648ms         1  43.648ms  43.648ms  43.648ms  cudaDeviceReset
  0.08%  22.182ms     80000     277ns       0ns  121.07us  cudaSetupArgument
  0.06%  15.437ms     40000     385ns       0ns  24.433us  cudaGetLastError
  0.05%  12.929ms     40000     323ns       0ns  57.253us  cudaConfigureCall
  0.00%  1.1932ms        91  13.112us       0ns  734.09us  cuDeviceGetAttribute
  0.00%  762.17us         1  762.17us  762.17us  762.17us  cudaFree
  0.00%  359.93us         1  359.93us  359.93us  359.93us  cuDeviceGetName
  0.00%  8.3880us         1  8.3880us  8.3880us  8.3880us  cuDeviceTotalMem
  0.00%  2.5520us         3     850ns     364ns  1.8230us  cuDeviceGetCount
  0.00%  1.8240us         3     608ns     365ns  1.0940us  cuDeviceGet

CUDA Samples\v8.0\1_Utilities\bandwidthTest results:

[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX 1050 Ti
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11038.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11469.6

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     95214.0

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

1 Answer

Stack Overflow user

Answer accepted

Posted on 2017-09-06 00:12:17

The compiler was optimizing out the memory reads. This was pointed out by Robert Crovella. Thank you for the help; I would never have guessed it.

In detail:

The compiler was optimizing out the val variable and, by extension, the memory read itself.
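
The original answer does not show a corrected kernel, but a common way to keep such a read from being eliminated is to make the loaded value observable, for example by feeding it into a conditional write that never actually fires. A minimal sketch of that idea follows (the out_data parameter and the -1 sentinel are assumptions for illustration, not part of the original code):

// Hypothetical variant of get_mem_kernel: the loaded value feeds a
// conditional store, so the compiler cannot remove the read. The
// predicate is never true for data written by set_mem_kernel (which
// stores the values 0..size-1), so no extra memory traffic occurs.
__global__
void get_mem_kernel_checked(const unsigned int size, const data_type * const in_data, data_type * const out_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size)
    {
        data_type val = in_data[idx];
        if (val == static_cast <data_type> (-1))
        {
            out_data[idx] = val;
        }
    }
}

As a sanity check on the method itself: set_mem_kernel really does touch memory, and its nvprof average of 972.54 us for 100,000,000 bytes works out to about 102.8 GB/s, which is consistent with the 112 GB/s specification and with the ~95 GB/s device-to-device figure reported by bandwidthTest.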

Score 0
Original content provided by Stack Overflow.

Source: https://stackoverflow.com/questions/46064030