I really can't figure out what's going on here.
I have a compute shader that takes in an FFT result (from real input) and computes the powers of each bin, storing them in a different buffer (UAV). The FFT implementation is that of the D3DCSX library.
The shader in question:
struct Complex {
float real;
float imag;
};
RWStructuredBuffer<Complex> g_result : register(u0);
RWStructuredBuffer<float> g_powers : register(u1);
[numthreads(1, 1, 1)] void main(uint3 id : SV_DispatchThreadID) {
const uint bin = id.x;
const float real = g_result[bin + 1].real;
const float imag = g_result[bin + 1].imag;
const float power = real * real + imag * imag;
const float mag = sqrt(power);
const float db = 10.0f * log10(1.0f + power);
g_powers[bin] = power;
}
The buffer creation code:
//The buffer in which the resulting powers are stored (m_result_buffer1)
buffer_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
buffer_desc.ByteWidth = sizeof(float) * NumBins();
buffer_desc.CPUAccessFlags = 0;
buffer_desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;
buffer_desc.StructureByteStride = sizeof(float);
buffer_desc.Usage = D3D11_USAGE_DEFAULT;
hr = m_device->CreateBuffer (
&buffer_desc,
nullptr,
&m_result_buffer1
); HR_THROW();
//UAV for m_result_buffer1
view_desc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
view_desc.Buffer.FirstElement = 0;
view_desc.Format = DXGI_FORMAT_R32_TYPELESS;
view_desc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_RAW;
view_desc.Buffer.NumElements = NumBins();
hr = m_device->CreateUnorderedAccessView (
m_result_buffer1,
&view_desc,
&m_result_view
); HR_THROW();
//Buffer for reading powers to the CPU
buffer_desc.BindFlags = 0;
buffer_desc.ByteWidth = sizeof(float) * NumBins();
buffer_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
buffer_desc.MiscFlags = 0;
buffer_desc.StructureByteStride = sizeof(float);
buffer_desc.Usage = D3D11_USAGE_STAGING;
hr = m_device->CreateBuffer (
&buffer_desc,
nullptr,
&m_result_buffer2
); HR_THROW();
The dispatch code:
CComPtr<ID3D11UnorderedAccessView> result_view;
hr = m_fft->ForwardTransform (
m_sample_view,
&result_view
); HR_THROW();
ID3D11UnorderedAccessView* views[] = {
result_view, //FFT UAV (u0)
m_result_view //Power UAV (u1)
};
m_context->CSSetShader(m_power_cs, nullptr, 0);
m_context->CSSetUnorderedAccessViews(0, 2, views, nullptr);
m_context->Dispatch(NumBins(), 1, 1);
And finally the CPU mapping code:
m_context->CopyResource(m_result_buffer2, m_result_buffer1);
D3D11_MAPPED_SUBRESOURCE sub = { 0 };
m_context->Map(m_result_buffer2, 0, D3D11_MAP_READ, 0, &sub);
memcpy(result, sub.pData, sizeof(float) * NumBins());
m_context->Unmap(m_result_buffer2, 0);
What happens is that this shader appears to have every thread write to the same index in the output buffer. The mapped buffer always reads a correct value for the first bin, then 0.0f for every other bin. The equivalent code on the CPU runs just fine. What's weird is that I've placed conditionals in the shader and know that bin is not always 0, and that the power of bins other than bin 0 is not always 0.0f either. I've also tried writing to multiple bins from the same thread using a for loop, and the same thing happens. What am I doing wrong?
I have a hunch that the buffer creation code or the mapping code is at the root of the problem. I know I'm running the correct number of threads on the GPU and that the dispatch IDs are correct; it's the CPU-side result that's wrong.
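For reference, the CPU-side equivalent I'm comparing against is essentially this (a sketch; the names are illustrative and it assumes the FFT result has been copied back to the CPU as an array of real/imag float pairs):
// Sketch of the CPU-side computation, assuming a matching Complex { float real, imag; } struct.
void ComputePowersCPU(const Complex* fft_result, float* powers, size_t num_bins)
{
    for (size_t bin = 0; bin < num_bins; ++bin)
    {
        const float real = fft_result[bin + 1].real;
        const float imag = fft_result[bin + 1].imag;
        powers[bin] = real * real + imag * imag;
    }
}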
Problem Solved!
I was using a RWStructuredBuffer to represent what is really a RWByteAddressBuffer. Not entirely sure how that led to this result, but it did. So the FFT result is now read through a RWByteAddressBuffer. What seemed strange about this buffer at first was that the D3DCSX implementation spaces the float values 8 bytes apart - I wasn't sure why at the time. This is my compute shader now (computing decibels instead of powers this time - an unrelated change):
RWByteAddressBuffer g_result : register(u0);
RWStructuredBuffer<float> g_decibels : register(u1);
[numthreads(256, 1, 1)] void main(uint3 id : SV_DispatchThreadID) {
const float real = asfloat(g_result.Load(id.x * 8 + 0));
const float imag = asfloat(g_result.Load(id.x * 8 + 4));
const float power = real * real + imag * imag;
const float db = 10.0f * log10(1.0f + power);
g_decibels[id.x] = db;
}
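In hindsight, the byte offsets above simply correspond to this layout (my reading of the buffer, not something taken from the D3DCSX documentation): one complex value per bin, stored as two packed 4-byte floats:
// Assumed layout of the FFT result buffer: bin i starts at byte offset i * 8.
struct ComplexF32
{
    float real; // byte offset i * 8 + 0
    float imag; // byte offset i * 8 + 4
};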
I changed my decibel buffer's description to that of a structured buffer, though, just to make things easier for me:
buffer_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
buffer_desc.ByteWidth = sizeof(float) * NumBins();
buffer_desc.CPUAccessFlags = 0;
buffer_desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
buffer_desc.StructureByteStride = sizeof(float);
buffer_desc.Usage = D3D11_USAGE_DEFAULT;
hr = m_device->CreateBuffer (
&buffer_desc,
nullptr,
&m_result_buffer1
); HR_THROW();
view_desc.Buffer.FirstElement = 0;
view_desc.Buffer.Flags = 0;
view_desc.Buffer.NumElements = NumBins();
view_desc.Format = DXGI_FORMAT_UNKNOWN;
view_desc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
hr = m_device->CreateUnorderedAccessView (
m_result_buffer1,
&view_desc,
&m_result_view
); HR_THROW();
This is why g_decibels is still a RWStructuredBuffer.
Still unknown to me is whether it matters that the result buffer is declared read/write when only reads are necessary - if I change g_result to a regular ByteAddressBuffer I get no output. But at least it's working now.
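(My untested guess is that a read-only ByteAddressBuffer is a shader resource rather than a UAV, so it would also need to be declared on a t register and bound with CSSetShaderResources instead of CSSetUnorderedAccessViews - roughly like this sketch:)
// Untested sketch: binding the FFT result as a raw SRV instead of a UAV.
CComPtr<ID3D11Resource> fft_resource;
result_view->GetResource(&fft_resource); // underlying buffer of the FFT UAV
D3D11_SHADER_RESOURCE_VIEW_DESC srv_desc = {};
srv_desc.Format = DXGI_FORMAT_R32_TYPELESS;
srv_desc.ViewDimension = D3D11_SRV_DIMENSION_BUFFEREX;
srv_desc.BufferEx.FirstElement = 0;
srv_desc.BufferEx.NumElements = NumBins() * 2; // real + imag per bin (assumption)
srv_desc.BufferEx.Flags = D3D11_BUFFEREX_SRV_FLAG_RAW;
CComPtr<ID3D11ShaderResourceView> result_srv;
hr = m_device->CreateShaderResourceView(fft_resource, &srv_desc, &result_srv); HR_THROW();
m_context->CSSetShaderResources(0, 1, &result_srv.p); // t0 instead of u0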
Related
I'm testing writing to 2D and 3D textures in compute shaders, outputting a gradient noise texture consisting of 32-bit floats. Writing to a 2D texture works fine, but writing to a 3D texture doesn't. Are there additional considerations that need to be made when creating a 3D texture compared to a 2D texture?
Code of how I'm defining the 3D texture below:
HRESULT BaseComputeShader::CreateTexture3D(UINT width, UINT height, UINT depth, DXGI_FORMAT format, ID3D11Texture3D** texture)
{
D3D11_TEXTURE3D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(textureDesc));
textureDesc.Width = width;
textureDesc.Height = height;
textureDesc.Depth = depth;
textureDesc.MipLevels = 1;
textureDesc.Format = format;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
return renderer->CreateTexture3D(&textureDesc, 0, texture);
}
HRESULT BaseComputeShader::CreateTexture3DUAV(UINT depth, DXGI_FORMAT format, ID3D11Texture3D** texture, ID3D11UnorderedAccessView** unorderedAccessView)
{
D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format = format;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE3D;
uavDesc.Texture3D.MipSlice = 0;
uavDesc.Texture3D.FirstWSlice = 0;
uavDesc.Texture3D.WSize = depth;
return renderer->CreateUnorderedAccessView(*texture, &uavDesc, unorderedAccessView);
}
HRESULT BaseComputeShader::CreateTexture3DSRV(DXGI_FORMAT format, ID3D11Texture3D** texture, ID3D11ShaderResourceView** shaderResourceView)
{
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format = format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE3D;
srvDesc.Texture3D.MostDetailedMip = 0;
srvDesc.Texture3D.MipLevels = 1;
return renderer->CreateShaderResourceView(*texture, &srvDesc, shaderResourceView);
}
And how I'm writing to it in the compute shader:
// The texture we're writing to
RWTexture3D<float> outputTexture : register(u0);
[numthreads(8, 8, 8)]
void main(uint3 DTid : SV_DispatchThreadID)
{
float noiseValue = 0.0f;
float value = 0.0f;
float localAmplitude = amplitude;
float localFrequency = frequency;
// Loop for the number of octaves, running the noise function as many times as desired (8 is usually sufficient)
for (int k = 0; k < octaves; k++)
{
noiseValue = noise(float3(DTid.x * localFrequency, DTid.y * localFrequency, DTid.z * localFrequency)) * localAmplitude;
value += noiseValue;
// Calculate a new amplitude based on the input persistence/gain value
// amplitudeLoop will get smaller as the number of layers (i.e. k) increases
localAmplitude *= persistence;
// Calculate a new frequency based on a lacunarity value of 2.0
// This gives us 2^k as the frequency
// i.e. Frequency at k = 4 will be f * 2^4 as we have looped 4 times
localFrequency *= 2.0f;
}
// Output value to the 3D index in the texture provided by thread indexing
outputTexture[DTid.xyz] = value;
}
And finally, how I'm running the shader:
// Set the shader
deviceContext->CSSetShader(computeShader, nullptr, 0);
// Set the shader's buffers and views
deviceContext->CSSetConstantBuffers(0, 1, &cBuffer);
deviceContext->CSSetUnorderedAccessViews(0, 1, &textureUAV, nullptr);
// Launch the shader
deviceContext->Dispatch(512, 512, 512);
// Reset the shader now we're done
deviceContext->CSSetShader(nullptr, nullptr, 0);
// Reset the shader views
ID3D11UnorderedAccessView* ppUAViewnullptr[1] = { nullptr };
deviceContext->CSSetUnorderedAccessViews(0, 1, ppUAViewnullptr, nullptr);
// Create the shader resource view for access in other shaders
HRESULT result = CreateTexture3DSRV(DXGI_FORMAT_R32_FLOAT, &texture, &textureSRV);
if (result != S_OK)
{
MessageBox(NULL, L"Failed to create texture SRV after compute shader execution", L"Failed", MB_OK);
exit(0);
}
My bad, simple mistake. Compute shader threads are limited in number: a thread group is limited to a total of 1024 threads, and the Dispatch call cannot dispatch more than 65535 thread groups. The HLSL compiler will catch the former issue, but the Visual C++ compiler will not catch the latter.
If you create a texture of 512 * 512 * 512 (which seems to be what you are trying to achieve), your dispatch size needs to be divided by the thread-group size:
deviceContext->Dispatch(512 / 8, 512 / 8, 512 / 8);
In your previous case, the dispatch was:
(512 * 8) * (512 * 8) * (512 * 8) = 68719476736 threads
which very likely triggered timeout detection (TDR) and crashed the driver.
Also, the limit of 65535 thread groups is per dimension, so in your case (512 / 8 = 64 groups per dimension) you are completely safe to run this.
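A small sketch of the general pattern (rounding up so sizes that are not multiples of 8 still get full coverage; width, height and depth are assumed to hold the texture dimensions):
// Round the texture size up to whole 8x8x8 thread groups (matching the shader's numthreads).
const UINT groupsX = (width  + 7) / 8;
const UINT groupsY = (height + 7) / 8;
const UINT groupsZ = (depth  + 7) / 8;
// Each of these must stay <= D3D11_CS_DISPATCH_MAX_THREAD_GROUPS_PER_DIMENSION (65535).
deviceContext->Dispatch(groupsX, groupsY, groupsZ); // 64 x 64 x 64 groups for a 512^3 texture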
And one last thing: you can create both the shader resource view and the unordered access view right after creating your 3D texture (before the dispatch call).
This is generally recommended to avoid mixing context code with resource creation code.
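Using the helper functions from the question, that could look roughly like this (a sketch; in the question these are methods of BaseComputeShader, so assume this runs inside such a method):
// Sketch: create the texture and both views once, up front, using the question's helpers.
ID3D11Texture3D* texture = nullptr;
ID3D11UnorderedAccessView* textureUAV = nullptr;
ID3D11ShaderResourceView* textureSRV = nullptr;
HRESULT hr = CreateTexture3D(512, 512, 512, DXGI_FORMAT_R32_FLOAT, &texture);
if (SUCCEEDED(hr)) hr = CreateTexture3DUAV(512, DXGI_FORMAT_R32_FLOAT, &texture, &textureUAV);
if (SUCCEEDED(hr)) hr = CreateTexture3DSRV(DXGI_FORMAT_R32_FLOAT, &texture, &textureSRV);
// Later, per dispatch: bind textureUAV, Dispatch, unbind the UAV, then use textureSRV in other shaders.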
On resource creation, your check is not valid either:
if (result != S_OK)
An HRESULT's success condition is >= 0, so there are success codes other than S_OK.
You can use the built-in macros instead, e.g.:
if (FAILED(result)) // or SUCCEEDED(result) for the success path
I'm trying to JIT-compile my Halide program so I can use it later in my code a few times on different images. But I think I'm doing something wrong, can anyone correct me?
First I create the Halide function to run:
void m_gammaFunctionTMOGenerate()
{
Halide::ImageParam img(Halide::type_of<float>(), 3);
img.set_stride(0, 4);
img.set_stride(2, 1);
Halide::Var x, y, c;
Halide::Param<float> key, sat, clampMax, clampMin;
Halide::Param<bool> cS;
Halide::Func gamma;
// algorithm
//img.width() , img.height();
if (cS.get())
{
float k1 = 1.6774;
float k2 = 0.9925;
sat.set((1 + k1) * pow(key.get(), k2) / (1 + k1 * pow(key.get(), k2)));
}
Halide::Expr luminance = img(x, y, 0) * 0.072186f + img(x, y, 1) * 0.715158f + img(x, y, 2) * 0.212656f;
Halide::Expr ldr_lum = (luminance - clampMin) / (clampMax - clampMin);
Halide::clamp(ldr_lum, 0.f, 1.f);
ldr_lum = Halide::pow(ldr_lum, key);
Halide::Expr imLum = img(x, y, c) / luminance;
imLum = Halide::pow(imLum, sat) * ldr_lum;
Halide::clamp(imLum, 0.f, 1.f);
gamma(x, y, c) = imLum;
// schedule
gamma.vectorize(x, 16).parallel(y);
// compilation
auto & obuff = gamma.output_buffer();
obuff.set_stride(0, 4);
obuff.set_stride(2, 1);
obuff.set_extent(2, 3);
std::vector<Halide::Argument> arguments = { img, key, sat, clampMax, clampMin, cS };
m_gammaFunction = (gammafunction)(gamma.compile_jit());
}
and store it in a pointer:
typedef int(*gammafunction)(buffer_t*, float, float, float, float, bool, buffer_t*);
gammafunction m_gammaFunction;
Then I try to run it:
buffer_t output_buf = { 0 };
//// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// // If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 4; // Assuming RGBA
// // No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = m_photoFunction(&buf, params[0], &output_buf);
But it doesn't work...
Error:
Exception thrown at 0x000002974F552DE0 in Viewer.exe: 0xC0000005: Access violation executing location 0x000002974F552DE0.
If there is a handler for this exception, the program may be safely continued.
Edit:
Here is my code for running the function:
buffer_t output_buf = { 0 };
//// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// // If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 3; // Assuming RGBA
// // No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = m_gammaFunction(&buf, params[0], params[1], params[2], params[3], params[4] > 0.5 ? true : false, &output_buf);
if (error) {
printf("Halide returned an error: %d\n", error);
return -1;
}
memcpy(output, data, size * sizeof(float));
Can anyone help me with it?
Edit:
Thanks to @KhouriGiordano I found out what I was doing wrong. Indeed, I had switched from AOT compilation to this code. So now my code looks like this:
class GammaOperator
{
public:
GammaOperator();
int realize(buffer_t * input, float params[], buffer_t * output, int width);
private:
HalideFloat m_key;
HalideFloat m_sat;
HalideFloat m_clampMax;
HalideFloat m_clampMin;
HalideBool m_cS;
Halide::ImageParam m_img;
Halide::Var x, y, c;
Halide::Func m_gamma;
};
GammaOperator::GammaOperator()
: m_img( Halide::type_of<float>(), 3)
{
Halide::Expr w = (1.f + 1.6774f) * pow(m_key.get(), 0.9925f) / (1.f + 1.6774f * pow(m_key.get(), 0.9925f));
Halide::Expr sat = Halide::select(m_cS, m_sat, w);
Halide::Expr luminance = m_img(x, y, 0) * 0.072186f + m_img(x, y, 1) * 0.715158f + m_img(x, y, 2) * 0.212656f;
Halide::Expr ldr_lum = (luminance - m_clampMin) / (m_clampMax - m_clampMin);
ldr_lum = Halide::clamp(ldr_lum, 0.f, 1.f);
ldr_lum = Halide::pow(ldr_lum, m_key);
Halide::Expr imLum = m_img(x, y, c) / luminance;
imLum = Halide::pow(imLum, sat) * ldr_lum;
imLum = Halide::clamp(imLum, 0.f, 1.f);
m_gamma(x, y, c) = imLum;
}
int GammaOperator::realize(buffer_t * input, float params[], buffer_t * output, int width)
{
m_img.set(Halide::Buffer(Halide::type_of<float>(), input));
m_img.set_stride(0, 4);
m_img.set_stride(1, width * 4);
m_img.set_stride(2, 4);
// algorithm
m_gamma.vectorize(x, 16).parallel(y);
//params[0], params[1], params[2], params[3], params[4] > 0.5 ? true : false
//{ img, key, sat, clampMax, clampMin, cS };
m_key.set(params[0]);
m_sat.set(params[1]);
m_clampMax.set(params[2]);
m_clampMin.set(params[3]);
m_cS.set(params[4] > 0.5f ? true : false);
//// compilation
m_gamma.realize(Halide::Buffer(Halide::type_of<float>(), output));
return 0;
}
and I use it like this:
buffer_t output_buf = { 0 };
//// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// // If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 4; // Assuming RGBA
// // No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = s_gamma->realize(&buf, params, &output_buf, width);
but it still crashes in the m_gamma.realize call with this message in the console:
Error: Constraint violated: f0.stride.0 (4) == 1 (1)
By using Halide::Param::get(), you are extracting the (default of 0) value from the Param object at the time you call get(). If you want to use the parameter value given at the time you call the generated function, just use it without calling get and it should be implicitly converted to an Expr.
Since Param is not convertible to a boolean, the Halide way of doing an if is Halide::select().
You aren't using the clamped return value of Halide::clamp().
I don't see cS being used by the Halide code, only the C code.
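A minimal sketch of those two points (select instead of a C++ if, and keeping the value returned by clamp), reusing the names from the question:
// Sketch only: branch on the Param<bool> with select(), and assign the clamped Expr back.
Halide::Expr computed_sat = (1.f + 1.6774f) * Halide::pow(key, 0.9925f)
                          / (1.f + 1.6774f * Halide::pow(key, 0.9925f));
// If cS is set, derive sat from key; otherwise use the sat parameter as given.
Halide::Expr sat_expr = Halide::select(cS, computed_sat, sat);
Halide::Expr ldr_lum = (luminance - clampMin) / (clampMax - clampMin);
ldr_lum = Halide::clamp(ldr_lum, 0.f, 1.f); // clamp() returns a new Expr; keep it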
Now to your JIT problem. It looks like you started doing AOT compilation and switched to JIT.
You make a std::vector<Halide::Argument> but don't pass it anywhere. How can Halide know what Param you want to use? It looks at the Func and finds references to ImageParam and Param objects.
How can you know what order it expects the Param? You have no control over this. I was able to dump the bitcode by defining HL_GENBITCODE=1 and then view that with llvm-dis to see your function:
int gamma
( buffer_t *img
, float clampMax
, float key
, float clampMin
, float sat
, void *user_context
, buffer_t *result
);
Use gamma.realize(Halide::Buffer(Halide::type_of<float>(), &output_buf)) instead of using gamma.compile_jit() and trying to call the generated function yourself.
For one time use:
Use Image instead of ImageParam.
Use Expr instead of Param.
For repeated use with a single JIT compile:
Keep the ImageParam and Param around and set them before realizing the Func.
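A compressed sketch of that second pattern (illustrative names; the idea is one compile, many realizes):
// Sketch: keep the ImageParam/Param objects alive, compile once, then set and realize per image.
Halide::ImageParam img(Halide::type_of<float>(), 3);
Halide::Param<float> key;
Halide::Var x, y, c;
Halide::Func gamma;
gamma(x, y, c) = Halide::pow(img(x, y, c), key); // stand-in for the real pipeline
gamma.compile_jit();                             // compile once up front
// For each image:
img.set(input_buffer);                           // a Halide::Buffer wrapping the input image
key.set(0.8f);
gamma.realize(output_buffer);                    // a Halide::Buffer wrapping the output image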
I cannot get bindless textures referencing linear memory to work -- the result is always a zero/black read. My initialization code:
The buffer:
int const num = 4 * 16;
int const size = num * sizeof(float);
cudaMalloc(buffer, size);
auto b = new float[num];
for (int i = 0; i < num; ++i)
{
b[i] = i % 4 == 0 ? 1 : 1;
}
cudaMemcpy(*buffer, b, size, cudaMemcpyHostToDevice);
The texture object:
cudaTextureDesc td;
memset(&td, 0, sizeof(td));
td.normalizedCoords = 0;
td.addressMode[0] = cudaAddressModeClamp;
td.addressMode[1] = cudaAddressModeClamp;
td.addressMode[2] = cudaAddressModeClamp;
td.readMode = cudaReadModeElementType;
td.sRGB = 0;
td.filterMode = cudaFilterModePoint;
td.maxAnisotropy = 16;
td.mipmapFilterMode = cudaFilterModePoint;
td.minMipmapLevelClamp = 0;
td.maxMipmapLevelClamp = 0;
td.mipmapLevelBias = 0;
struct cudaResourceDesc resDesc;
memset(&resDesc, 0, sizeof(resDesc));
resDesc.resType = cudaResourceTypeLinear;
resDesc.res.linear.devPtr = *buffer;
resDesc.res.linear.sizeInBytes = size;
resDesc.res.linear.desc.f = cudaChannelFormatKindFloat;
resDesc.res.linear.desc.x = 32;
resDesc.res.linear.desc.y = 32;
resDesc.res.linear.desc.z = 32;
resDesc.res.linear.desc.w = 32;
checkCudaErrors(cudaCreateTextureObject(texture, &resDesc, &td, nullptr));
The kernel:
__global__ void
d_render(uchar4 *d_output, uint imageW, uint imageH, float* buffer, cudaTextureObject_t texture)
{
uint x = blockIdx.x * blockDim.x + threadIdx.x;
uint y = blockIdx.y * blockDim.y + threadIdx.y;
if ((x < imageW) && (y < imageH))
{
// write output color
uint i = y * imageW + x;
//auto f = make_float4(buffer[0], buffer[1], buffer[2], buffer[3]);
auto f = tex1D<float4>(texture, 0);
d_output[i] = to_uchar4(f * 255);
}
}
The texture object is initialized with something sensible (4099) when given to the kernel. The Buffer version works flawlessly.
Why does the texture object return zero/black?
As per the CUDA Programming Guide, you need to use tex1Dfetch() to read from one-dimensional textures bound to linear memory, and tex1D() to read from one-dimensional textures bound to CUDA arrays. This applies both to CUDA texture references and to CUDA textures passed by object.
The difference between the two APIs is the coordinate argument. Textures bound to linear memory can only be addressed by integer element indices (hence the integer coordinate argument of tex1Dfetch()), whereas arrays support both unnormalised and normalised texture coordinates (thus the float coordinate argument of tex1D()).
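In the kernel from the question, that means changing the fetch to something along these lines (a sketch):
// Sketch: linear-memory texture objects are read with tex1Dfetch() and an integer
// element index, not tex1D() with a float coordinate.
uint i = y * imageW + x;
float4 f = tex1Dfetch<float4>(texture, 0); // element 0, matching the original tex1D call
// or, assuming one float4 per pixel in the buffer: tex1Dfetch<float4>(texture, i);
d_output[i] = to_uchar4(f * 255);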
Currently I'm working in the Tizen IDE.
I read the input data from the microphone and apply the FFT to it, but every time I get NaN output.
Here is my code:
ShortBuffer *pBuffer1 = pData->AsShortBufferN();
fft = new KissFFT(BUFFER_SIZE);
std::vector<short> input(pBuffer1->GetPointer(),
pBuffer1->GetPointer() + BUFFER_SIZE); // this contains audio data
std::vector<float> specturm(BUFFER_SIZE);
fft->spectrum(input, specturm);
Applying the FFT:
void KissFFT::spectrum(KissFFTO* fft, std::vector<short>& samples2,
std::vector<float>& spectrum) {
int len = fft->numSamples / 2 + 1;
kiss_fft_scalar* samples = (kiss_fft_scalar*) &samples2[0];
kiss_fftr(fft->config, samples, fft->spectrum);
for (int i = 0; i < len; i++) {
float re = scale(fft->spectrum[i].r) * fft->numSamples;
float im = scale(fft->spectrum[i].i) * fft->numSamples;
if (i > 0)
spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples / 2);
else
spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples);
AppLog("specturm %d",spectrum[i]); // everytime returns returns nan output
}
}
KissFFTO* KissFFT::create(int numSamples) {
KissFFTO* fft = new KissFFTO();
fft->config = kiss_fftr_alloc(numSamples/2, 0, NULL, NULL);
fft->spectrum = new kiss_fft_cpx[numSamples / 2 + 1];
fft->numSamples = numSamples;
return fft;
}
In fft->config there should be a parameter for the size of the FFT, e.g. 2048 or 4096, i.e. a power of 2. If you increase this value, you get more resolution in frequency.
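For example, something along these lines, keeping the structure of the create() function from the question (a sketch):
// Sketch: allocate the real-FFT config for a power-of-two number of input samples
// and size the spectrum for nfft / 2 + 1 complex bins.
const int nfft = 2048; // power of 2; must match the number of real input samples passed to kiss_fftr()
KissFFTO* fft = new KissFFTO();
fft->config = kiss_fftr_alloc(nfft, 0, NULL, NULL);
fft->spectrum = new kiss_fft_cpx[nfft / 2 + 1];
fft->numSamples = nfft;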
I'm using a constant buffer to pass data to my shaders at every frame, and I'm running into an issue where the values of some of the members of the buffer point to the same memory.
When I use the Visual Studio 2012 debugging tools, it looks like the data is being set in the buffer more or less correctly:
0 [0x00000000-0x00000003] | +0
1 [0x00000004-0x00000007] | +1
2 [0x00000008-0x0000000b] | +1
3 [0x0000000c-0x0000000f] | +1
4 [0x00000010-0x00000013] | +0.78539819
5 [0x00000014-0x00000017] | +1.1760513
6 [0x00000018-0x0000001b] | +0
7 [0x0000001c-0x0000001f] | +1
The problem is that when I debug the shader, the sunAngle and phaseFunction both have the same value - specifically 0.78539819, which should be the value of sunAngle only. It does change to 1.1760513 if I swap the order of the two floats, but both will still be the same. I thought I'd packed everything together correctly, but am I missing how to define exactly what constants are in each part of the buffer?
Here's the C++ structure I'm using:
struct SunData {
DirectX::XMFLOAT4 sunPosition;
float sunAngle;
float phaseFunctionResult;
};
And the shader buffer looks like this:
// updated as the sun moves through the sky
cbuffer sunDependent : register( b1 )
{
float4 sunPosition;
float sunAngle; // theta
float phaseFunctionResult; // F( theta, g )
}
Here's the code I'm using to initialize the buffer:
XMVECTOR pos = XMVectorSet( 0, 1, 1, 1 );
XMStoreFloat3( &_sunPosition, pos );
XMStoreFloat4( &_sun.sunPosition, pos );
_sun.sunAngle = XMVectorGetX(
XMVector3AngleBetweenVectors( pos, XMVectorSet( 0, 1, 0, 0 ) )
);
_sun.phaseFunctionResult = _planet.phaseFunction( _sun.sunAngle );
// Fill in a buffer description.
D3D11_BUFFER_DESC cbDesc;
cbDesc.ByteWidth = sizeof( SunData ) + 8;
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;
// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &_sun;
data.SysMemPitch = 0;
data.SysMemSlicePitch = 0;
// Create the buffer.
ID3D11Buffer *constantBuffer = nullptr;
HRESULT hr = _d3dDevice->CreateBuffer(
&cbDesc,
&data,
&constantBuffer
);
assert( SUCCEEDED( hr ) );
// Set the buffer.
_d3dDeviceContext->VSSetConstantBuffers( 1, 1, &constantBuffer );
_d3dDeviceContext->PSSetConstantBuffers( 1, 1, &constantBuffer );
Release( constantBuffer );
And here's the pixel shader that's using the values:
float4 main( in ATMOS_PS_INPUT input ) : SV_TARGET
{
float R = sunAngle * sunPosition.x * sunIntensity.x
* attenuationCoefficient.x
* phaseFunctionResult;
return float4( R, 1, 1, 1 );
}
It looks like a padding issue like in this question: Question
All constant buffers should be sized to be divisible by sizeof(four-component vector), i.e. 16 bytes (doc).
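A sketch of what that could look like for the struct from the question, padding it explicitly to a 16-byte multiple instead of relying on sizeof(SunData) + 8:
// Sketch: pad the CPU-side struct so its size is a multiple of 16 bytes,
// matching the float4-register packing of the HLSL cbuffer.
struct SunData {
    DirectX::XMFLOAT4 sunPosition;  // maps to float4 sunPosition
    float sunAngle;                 // maps to float sunAngle
    float phaseFunctionResult;      // maps to float phaseFunctionResult
    float padding[2];               // explicit padding up to 32 bytes
};
static_assert(sizeof(SunData) % 16 == 0, "constant buffers must be sized in 16-byte multiples");
// ...
cbDesc.ByteWidth = sizeof(SunData); // already a multiple of 16, no + 8 needed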