IDXGISwapChain::Present takes too long

4 comments, last by Devaniti 3 years, 3 months ago

I noticed that IDXGISwapChain::Present takes roughly 3.5 ms, even after stripping all of my own command recording and execution from the render loop.

According to PIX, this seems to be related to internal command recording and execution in DXGI itself.

Any ideas what might cause this? For reference, the vanilla ImGui D3D12/x64 example (Debug configuration, without VSync) spends ≤ 1 ms in Present on the same machine.

  • DXGI and D3D12 debug modes are off.
  • D3D12 validation layer is off.
  • Swap chain descriptor (I already tried changing format, flags, etc.):
DXGI_SWAP_CHAIN_DESC1
{
...
.Format      = DXGI_FORMAT_R10G10B10A2_UNORM,
.Stereo      = FALSE,
.SampleDesc  = { .Count = 1u, .Quality = 0u },
.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT,
.BufferCount = 3u,
.Scaling     = DXGI_SCALING_STRETCH,
.SwapEffect  = DXGI_SWAP_EFFECT_FLIP_DISCARD,
.AlphaMode   = DXGI_ALPHA_MODE_IGNORE,
.Flags       = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH | DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING
}
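
For context, here is roughly how the swap chain is created from that descriptor (a minimal sketch; factory, command_queue, and hwnd are assumed to exist, and the names are illustrative rather than verbatim from my code):

#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// For D3D12, CreateSwapChainForHwnd takes the direct command queue, not the device.
ComPtr<IDXGISwapChain1> swap_chain1;
HRESULT hr = factory->CreateSwapChainForHwnd(
    command_queue.Get(),
    hwnd,
    &swap_chain_desc,  // the DXGI_SWAP_CHAIN_DESC1 shown above
    nullptr,           // no fullscreen descriptor: windowed swap chain
    nullptr,           // no output restriction
    &swap_chain1);

// Upgrade to IDXGISwapChain3 for GetCurrentBackBufferIndex.
ComPtr<IDXGISwapChain3> swap_chain;
if (SUCCEEDED(hr))
    hr = swap_chain1.As(&swap_chain);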

🧙


Possibly a naïve question: when you present, are you waiting for VSync?

@Steven Ford

No VSync:

m_swap_chain->Present(0u, DXGI_PRESENT_ALLOW_TEARING);
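
(Note that DXGI_PRESENT_ALLOW_TEARING is only valid when the swap chain was created with DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING, as in the descriptor above, and when the factory actually reports tearing support. A minimal sketch of the usual capability check, assuming a variable named factory:)

#include <dxgi1_5.h>

// DXGI_FEATURE_PRESENT_ALLOW_TEARING is queried through IDXGIFactory5.
BOOL allow_tearing = FALSE;
Microsoft::WRL::ComPtr<IDXGIFactory5> factory5;
if (SUCCEEDED(factory.As(&factory5)))
    factory5->CheckFeatureSupport(DXGI_FEATURE_PRESENT_ALLOW_TEARING,
                                  &allow_tearing, sizeof(allow_tearing));

// Only pass the flag when it is actually supported.
const UINT present_flags = allow_tearing ? DXGI_PRESENT_ALLOW_TEARING : 0u;
m_swap_chain->Present(0u, present_flags);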

Edit: For reference, the ImGui D3D12/x64 example with Debug configuration and without VSync (as opposed to the GitHub version) …

🧙

Figured out the difference: my notebook has an NVIDIA GTX 960M and an Intel HD Graphics 5500. My own application uses the former, while ImGui uses the latter (the default). When I use the NVIDIA GTX 960M for both, I observe the same delays.

🧙

There is overhead when rendering on one GPU and presenting on another, because buffers have to be copied and synchronized between the two adapters.
In that scenario you can use GPUView to see that the Intel iGPU is also involved in your app, or to verify that it isn't when it shouldn't be used.
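
If you want the adapter choice to be explicit rather than left to the default, you can enumerate adapters by GPU preference. A sketch, assuming a DXGI 1.6 IDXGIFactory6 named factory6 (requesting DXGI_GPU_PREFERENCE_MINIMUM_POWER instead would typically return the iGPU that owns the laptop display, which avoids the cross-adapter copy on present):

#include <d3d12.h>
#include <dxgi1_6.h>

// Walk adapters in high-performance order and take the first hardware
// adapter that can create a D3D12 device.
Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter;
for (UINT i = 0u;
     factory6->EnumAdapterByGpuPreference(
         i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
         IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
     ++i)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
        continue; // skip WARP/software adapters

    // Passing nullptr just tests whether device creation would succeed.
    if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                    __uuidof(ID3D12Device), nullptr)))
        break;
}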

