Problems with fcvDrawContouru8 on 64-bit systems - C++

I am using FastCV v1.7 to develop an image processing algorithm. Part of the process involves finding contours in an image, selecting a choice few among them, and then drawing only those contours.
This code runs smoothly on 32-bit systems and produces the expected output, but on 64-bit systems the same code crashes unpredictably during the loop that executes fcvDrawContouru8. The crash point varies: sometimes the loop iterates 2 or 3 times, sometimes it crashes on the first iteration. I can't work out whether the problem is with memory allocation on 64-bit or with FastCV itself. Any suggestions would be helpful.
uint8_t* dist_fcv = (uint8_t*)fcvMemAlloc(dist_8u.cols * dist_8u.rows * OPT_CV_ELEM_SIZE(OPT_CV_8UC1), FCV_ALIGN);
memset(dist_fcv, 0, dist_8u.cols * dist_8u.rows * OPT_CV_ELEM_SIZE(OPT_CV_8UC1));
uint32_t maxNumContours = MAX_CNT;
uint32_t sizeOfpBuffer = 0;
uint32_t maxPoints = ((2 * dist_8u.cols) + (2 * dist_8u.rows));
uint32_t pNumContours = 0;
uint32_t pNumContourPoints[MAX_CNT] = {0};
uint32_t** pContourStartPointsfind = (uint32_t**)fcvMemAlloc(MAX_CNT * 2 * sizeof(uint32_t*), 16);
sizeOfpBuffer = (MAX_CNT * 2 * maxPoints * sizeof(uint32_t));
uint32_t* pPointBuffer = (uint32_t*)malloc(sizeOfpBuffer);
memset(pPointBuffer, 0, sizeOfpBuffer);
int32_t hierarchy[MAX_CNT][4];
void* cHandle = fcvFindContoursAllocate(dist_8u.cols);
fcvFindContoursExternalu8(textureless.data.ptr,
                          dist_8u.cols,
                          dist_8u.rows,
                          dist_8u.cols,
                          maxNumContours,
                          &pNumContours,
                          pNumContourPoints,
                          pContourStartPointsfind,
                          pPointBuffer,
                          sizeOfpBuffer,
                          hierarchy,
                          cHandle);
// Copy only the contours whose area reaches the threshold into the draw buffers.
size_t n_TL = 0;
uint32_t** pContourStartPointsdraw = (uint32_t**)fcvMemAlloc(MAX_CNT * 2 * sizeof(uint32_t*), 16);
uint32_t pNumDrawContourPoints[MAX_CNT] = {0};
uint32_t* dPointBuffer = (uint32_t*)malloc(sizeOfpBuffer);
uint32_t* start_contour = pPointBuffer;
uint32_t* start_contour_dPoint = dPointBuffer;
uint32_t** startFind_ptr = pContourStartPointsfind;
uint32_t** draw_ptr = pContourStartPointsdraw;
for (size_t i = 0; i < pNumContours; i++, startFind_ptr++)
{
    int points_per_contour = pNumContourPoints[i];
    double area = polyArea(start_contour, points_per_contour * 2);
    if (area < min_textureless_area)
    {
        start_contour = start_contour + points_per_contour * 2;
        continue;
    }
    *(draw_ptr) = *(startFind_ptr);
    pNumDrawContourPoints[n_TL] = pNumContourPoints[i];
    memcpy(start_contour_dPoint, start_contour, points_per_contour * 2 * sizeof(uint32_t));
    start_contour_dPoint = start_contour_dPoint + points_per_contour * 2;
    start_contour = start_contour + points_per_contour * 2;
    n_TL++;
    draw_ptr++;
}
uint32_t* holeflag = (uint32_t*)malloc(pNumContours * sizeof(uint32_t));
memset(holeflag, 0, pNumContours * sizeof(uint32_t));
uint32_t bufferSize = 0;
start_contour_dPoint = dPointBuffer;
draw_ptr = pContourStartPointsdraw;
// The crash happens inside this loop, on varying iterations.
for (int i = 0; i < n_TL; i++)
{
    int points_per_contour = pNumDrawContourPoints[i];
    bufferSize = points_per_contour * 2 * sizeof(uint32_t);
    fcvDrawContouru8(dist_fcv,
                     dist_8u.cols,
                     dist_8u.rows,
                     dist_8u.cols,
                     1,
                     holeflag,
                     &pNumDrawContourPoints[i],
                     (const uint32_t** __restrict)(draw_ptr),
                     bufferSize,
                     start_contour_dPoint,
                     hierarchy,
                     1, 1, i + 1, 0);
    start_contour_dPoint = start_contour_dPoint + points_per_contour * 2;
    draw_ptr++;
}
free(pPointBuffer);
fcvFindContoursDelete(cHandle);
fcvMemFree(pContourStartPointsfind);

Related

VP8 C/C++ source, how to encode frames in ARGB format to frame instead of from file

I'm trying to get started with the VP8 library. I'm not building it in the standard way they tell you to; I just loaded all of the main files and the "encoder" folder into a new Visual Studio C++ DLL project and included the C files in an extern "C" dll export function, which so far builds fine. I just have no idea where to start with the C++ API to encode, say, 3 frames of ARGB data into a very basic video.
The only example I could find is simple_encoder.c in the examples folder, but its premise is that it loads another file, parses its frames, and converts them, so it seems a bit complicated. I just want to be able to pass in a byte array of a few ARGB frames and have it output a very simple VP8 video.
I've seen How to encode series of images into VP8 using WebM VP8 Encoder API? (C/C++), but the accepted answer just links to the build instructions and references the general specification of the VP8 format; the closest thing I could find there is the example encoding parameters. I want to do everything from C++, and I can't seem to find any examples besides the default simple_encoder.c.
Just to cite some of the relevant parts I think I understand but still need more help with:
//in int main...
...
vpx_image_t raw;
if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, info.frame_width,
info.frame_height, 1)) {
//"Failed to allocate image." error
}
So that part I think I understand for the most part; VPX_IMG_FMT_I420 is the only part that's not defined in this file itself, but it's in vpx_image.h, first as
#define VPX_IMG_FMT_PLANAR
//then after...
typedef enum vpx_img_fmt {
VPX_IMG_FMT_NONE,
VPX_IMG_FMT_RGB24, /**< 24 bit per pixel packed RGB */
///some other formats....
VPX_IMG_FMT_ARGB, /**< 32 bit packed ARGB, alpha=255 */
VPX_IMG_FMT_YV12 = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP | 1, /**< planar YVU */
VPX_IMG_FMT_I420 = VPX_IMG_FMT_PLANAR | 2,
} vpx_img_fmt_t; /**< alias for enum vpx_img_fmt */
So I guess part of my question is answered already just from writing this: one of the formats is VPX_IMG_FMT_ARGB, although I don't know where it's defined. I'm guessing that in the above code I would replace it like so:
const VpxInterface *encoder = get_vpx_encoder_by_name("vp8");
vpx_image_t raw;
VpxVideoInfo info = { 0, 0, 0, { 0, 0 } };
info.frame_width = 1920;
info.frame_height = 1080;
info.codec_fourcc = encoder->fourcc;
info.time_base.numerator = 1;
info.time_base.denominator = 24;
bool didIt = vpx_img_alloc(&raw, VPX_IMG_FMT_ARGB,
info.frame_width, info.frame_height/*example width and height*/, 1);
//check didIt..
vpx_codec_enc_cfg_t cfg;
vpx_codec_ctx_t codec;
vpx_codec_err_t res;
res = vpx_codec_enc_config_default(encoder->codec_interface(), &cfg, 0);
//check res != VPX_CODEC_OK for error
cfg.g_w = info.frame_width;
cfg.g_h = info.frame_height;
cfg.g_timebase.num = info.time_base.numerator;
cfg.g_timebase.den = info.time_base.denominator;
cfg.rc_target_bitrate = 200;
VpxVideoWriter *writer = NULL;
writer = vpx_video_writer_open(outfile_arg, kContainerIVF, &info);
//check if !writer for error
vpx_codec_err_t startIt = vpx_codec_enc_init(&codec, encoder->codec_interface(), &cfg, 0);
//not even sure where codec was set actually..
//check startIt != VPX_CODEC_OK for an error starting
//now the next part in the original is where it reads from the input file, but instead
//I need to pass in an array of some ARGB byte arrays..
//thing is, in the next step they use a while loop for
//vpx_img_read(&raw, fopen("path/to/YV12formatVideo", "rb"))
//to set the contents of the raw vpx image allocated earlier, then
//they call another program that writes it to the writer object,
//but I don't know how to read the actual ARGB data directly into the raw image
//without using fopen, so that's one question (review at end)
//so I'll just put a placeholder here for the **question**
//assuming I have an array of byte arrays stored individually
//for simplicity's sake
const int size = 1920 * 1080 * 4;
uint8_t imgOne[size] = {/*some big byte array*/};
uint8_t imgTwo[size] = {/*some big byte array*/};
uint8_t imgThree[size] = {/*some big byte array*/};
uint8_t *images[] = {imgOne, imgTwo, imgThree};
int framesDone = 0;
int maxFrames = 3;
//so now I can replace the while loop with a filler function
//until I find out how to set the raw image with ARGB data
while(framesDone < maxFrames) {
    magicalFunctionToSetARGBOfRawImage(&raw, images[framesDone]);
    encode_frame(&codec, &raw, framesDone, 0, writer);
    framesDone++;
}
//now apparently it needs to be flushed after
while(encode_frame(&codec, 0, -1, 0, writer)){}
vpx_img_free(&raw);
bool isDestroyed = vpx_codec_destroy(&codec);
//check if !isDestroyed for error
//now we gotta define the encode_frame function, but simpler
//(and put it above the other function for reference purposes,
//or in a header)
static int encode_frame(
    vpx_codec_ctx_t *coydek,
    vpx_image_t *pic,
    int currentFrame,
    int flags,
    VpxVideoWriter *koysayv/*writer*/
) {
    //now to substitute their encodeFrame function for
    //the actual raw calls to simplify things
    const vpx_codec_err_t DidIt = vpx_codec_encode(
        coydek,
        pic,
        currentFrame,
        1,//duration I think
        flags,//whatever that is
        VPX_DL_REALTIME//different than simple_encoder
    );
    if (DidIt != VPX_CODEC_OK) return 0;//error here: VPX_CODEC_OK is 0, so nonzero means failure
    vpx_codec_iter_t iter = 0;
    const vpx_codec_cx_pkt_t *pkt = 0;
    int gotThings = 0;
    while ((pkt = vpx_codec_get_cx_data(coydek, &iter)) != 0) {
        gotThings = 1;
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
            //the & masks the VPX_FRAME_IS_KEY bit out of the flags field,
            //so keyframe is nonzero exactly when this packet holds a keyframe
            const int keyframe = (pkt->data.frame.flags & VPX_FRAME_IS_KEY) != 0;
            bool wroteFrame = vpx_video_writer_write_frame(
                koysayv,
                pkt->data.frame.buf,//the encoded frame data
                pkt->data.frame.sz,
                pkt->data.frame.pts
            );
            if (!wroteFrame) return 0; //error
        }
    }
    return gotThings;
}
Thing is, though, I don't know how to actually read the ARGB data into the raw image buffer itself. As mentioned above, the original example uses
vpx_img_read(&raw, fopen("path/to/file", "rb"))
but if I'm starting off with the byte arrays themselves, what function do I use instead of the file?
I have a feeling it can be solved using the source code for vpx_img_read, found in tools_common.c:
int vpx_img_read(vpx_image_t *img, FILE *file) {
  int plane;
  for (plane = 0; plane < 3; ++plane) {
    unsigned char *buf = img->planes[plane];
    const int stride = img->stride[plane];
    const int w = vpx_img_plane_width(img, plane) *
                  ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
    const int h = vpx_img_plane_height(img, plane);
    int y;
    for (y = 0; y < h; ++y) {
      if (fread(buf, 1, w, file) != (size_t)w) return 0;
      buf += stride;
    }
  }
  return 1;
}
although I personally am not experienced enough to know how to get a single frame's ARGB data in. I think the key part is fread(buf, 1, w, file), which reads parts of the file into buf, and since buf points at img->planes[plane], reading into buf automatically reads into img->planes[plane]. But I'm not sure that is the case, and I'm also not sure how to replace the fread from a file with a byte array that is already loaded into memory...
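My best guess at a memory-based variant is something like this (just a sketch: the name vpx_img_read_from_buffer is made up, and it assumes the source bytes are already laid out plane after plane, tightly packed, in the image's own format, e.g. I420; it reuses the same vpx_img_plane_width/vpx_img_plane_height helpers the quoted function calls):
int vpx_img_read_from_buffer(vpx_image_t *img, const unsigned char *buf) {
  int plane;
  for (plane = 0; plane < 3; ++plane) {
    unsigned char *dst = img->planes[plane];
    const int stride = img->stride[plane];
    const int w = vpx_img_plane_width(img, plane) *
                  ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
    const int h = vpx_img_plane_height(img, plane);
    int y;
    for (y = 0; y < h; ++y) {
      memcpy(dst, buf, w); // copy one row; the source is tightly packed...
      buf += w;
      dst += stride;       // ...but the destination rows may be padded
    }
  }
  return 1;
}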
VPX_IMG_FMT_ARGB is not defined because it is not supported by libvpx (as far as I have seen). To compress an image using this library, you must first convert it to one of the supported formats, such as I420 (VPX_IMG_FMT_I420). The code here (not mine): https://gist.github.com/racerxdl/8164330 does it well for the RGB format. If you don't want to use libswscale to do the conversion from RGB to I420, you can do something like this (this code converts an RGBA array of bytes to an I420 vpx_image that can be used by libvpx):
unsigned int tx = <width of your image>;
unsigned int ty = <height of your image>;
unsigned char *image = <array of bytes : RGBARGBA... of size ty*tx*4>;
vpx_image_t *imageVpx = <result that must have been properly initialized by libvpx>;
imageVpx->stride[VPX_PLANE_U    ] = tx/2;
imageVpx->stride[VPX_PLANE_V    ] = tx/2;
imageVpx->stride[VPX_PLANE_Y    ] = tx;
imageVpx->stride[VPX_PLANE_ALPHA] = tx;
imageVpx->planes[VPX_PLANE_U    ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_V    ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_Y    ] = new unsigned char[ty*tx  ];
imageVpx->planes[VPX_PLANE_ALPHA] = new unsigned char[ty*tx  ];
unsigned char *planeY = imageVpx->planes[VPX_PLANE_Y    ];
unsigned char *planeU = imageVpx->planes[VPX_PLANE_U    ];
unsigned char *planeV = imageVpx->planes[VPX_PLANE_V    ];
unsigned char *planeA = imageVpx->planes[VPX_PLANE_ALPHA];
for (unsigned int y = 0; y < ty; y++)
{
    if (!(y % 2))
    {
        // Even rows: sample U and V once per 2x2 block (4:2:0 subsampling).
        for (unsigned int x = 0; x < tx; x += 2)
        {
            int r = *image++;
            int g = *image++;
            int b = *image++;
            int a = *image++;
            *planeY++ = max(0, min(255, (( 66*r + 129*g +  25*b) >> 8) +  16));
            *planeU++ = max(0, min(255, ((-38*r + -74*g + 112*b) >> 8) + 128));
            *planeV++ = max(0, min(255, ((112*r + -94*g + -18*b) >> 8) + 128));
            *planeA++ = a;
            r = *image++;
            g = *image++;
            b = *image++;
            a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
    else
    {
        // Odd rows: only Y and alpha are written.
        for (unsigned int x = 0; x < tx; x++)
        {
            int const r = *image++;
            int const g = *image++;
            int const b = *image++;
            int const a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
}
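To plug this into an encode loop like the one in the question, one option (only a sketch; convertRGBAtoI420 is a hypothetical helper standing for the conversion loop above, rewritten to fill the planes that vpx_img_alloc already allocated, stepping by raw.stride[plane] since allocated rows may be padded) would be:
vpx_image_t raw;
if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, tx, ty, 1)) {
    // handle the allocation failure
}
for (int frame = 0; frame < maxFrames; ++frame) {
    convertRGBAtoI420(images[frame], &raw); // hypothetical helper, see above
    encode_frame(&codec, &raw, frame, 0, writer);
}
while (encode_frame(&codec, 0, -1, 0, writer)) {} // a NULL image flushes the encoder
vpx_img_free(&raw);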

Improve speed saving texture to file using Directx 11 c++

I have a DirectX11-based renderer, and I need to save a lot of rendered images to the hard disk. I have used SaveWICTextureToFile, but it takes 0.2 seconds to save each image.
Images are saved in resolution 1024x768.
Here it is the code to save the images:
ComPtr<ID3D11Texture2D> backBuffer;
HRESULT hr = _swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<LPVOID*>(backBuffer.GetAddressOf()));
throwIfFail(hr, "Unable to get a buffer");
#ifdef LOG
auto end = high_resolution_clock::now();
wchar_t str[256];
auto tmp = end;
#endif
hr = SaveWICTextureToFile(_context.Get(), backBuffer.Get(), GUID_ContainerFormatJpeg, w.c_str()/*,&GUID_WICPixelFormat32bppBGRA*/);
//hr = SaveDDSTextureToFile(_context.Get(), backBuffer.Get(), w.c_str()/*,&GUID_WICPixelFormat32bppBGRA*/);
#ifdef LOG
end = high_resolution_clock::now();
wsprintf(str, L"DXRender::saveLastRenderToFile: %d \n", duration_cast<microseconds>(end - tmp).count());
OutputDebugString(str);
tmp = end;
#endif
throwIfFail(hr, "Unable to save buffer");
How can I reduce the time it takes to save each image?
I have tested the libJPEG and libPNG libraries for saving images; they work fine and are faster than SaveWICTextureToFile. The only trick is that you need to take the format of the DirectX texture into account.
For example, with texture format B8G8R8A8, each pixel is packed into 32 bits, so you take the n-th group of 8 bits to extract each channel. That's the only trick here.
Example:
auto lastText = _textureDesc;
_inputTexture->GetDesc(&_textureDesc);
unsigned char r, g, b;
// textPtr points at the mapped texture data and RowPitch comes from the
// mapped subresource (see the mapping sketch below).
int _textureRowSize = _mappedResource.RowPitch / sizeof(unsigned char);
int hIdx = 0;
for (UINT i = 0; i < _textureDesc.Height; ++i)
{
    int wIdx = 0;
    for (UINT j = 0; j < _textureDesc.Width; ++j)
    {
        r = (unsigned char)textPtr[hIdx + wIdx + 2];
        g = (unsigned char)textPtr[hIdx + wIdx + 1];
        b = (unsigned char)textPtr[hIdx + wIdx];
        //do whatever you want with these values...
        wIdx += 4;
    }
    hIdx += _textureRowSize;
}
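The snippet above assumes the texture has already been copied somewhere the CPU can read and mapped; that is where _mappedResource and textPtr come from. A swap-chain buffer can't be mapped directly, so the usual plumbing is a staging copy. A sketch, with _device standing in for your ID3D11Device and assuming ComPtr-style smart pointers as elsewhere in the question:
// Copy the GPU texture into a CPU-readable staging texture, then map it.
D3D11_TEXTURE2D_DESC stagingDesc;
_inputTexture->GetDesc(&stagingDesc);
stagingDesc.Usage = D3D11_USAGE_STAGING;
stagingDesc.BindFlags = 0;
stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
stagingDesc.MiscFlags = 0;
// note: if the source is multisampled, resolve it first with ResolveSubresource
ComPtr<ID3D11Texture2D> staging;
HRESULT hr = _device->CreateTexture2D(&stagingDesc, nullptr, staging.GetAddressOf());
throwIfFail(hr, "Unable to create staging texture");
_context->CopyResource(staging.Get(), _inputTexture.Get());
D3D11_MAPPED_SUBRESOURCE _mappedResource;
hr = _context->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &_mappedResource);
throwIfFail(hr, "Unable to map staging texture");
unsigned char* textPtr = (unsigned char*)_mappedResource.pData;
// ...run the per-pixel loop above, then:
_context->Unmap(staging.Get(), 0);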

Xaudio2 pop sound when changing buffer or looping

I have a simple program that plays a sine wave.
At the end of the buffer I get a pop sound.
If I try to loop I get the pop sound between each loop.
If I alternate between buffers I get the pop sound.
struct win32_audio_buffer
{
    XAUDIO2_BUFFER XAudioBuffer = {};
    int16 *Memory;
};
struct win32_audio_setteings
{
    int32 SampleRate = 44100;
    int32 ToneHz = 200;
    int32 Channels = 2;
    int32 LoopTime = 10;
    int32 TotalSamples = SampleRate * LoopTime;
};
win32_audio_setteings AudioSetteings;
win32_audio_buffer MainAudioBuffer;
win32_audio_buffer SecondaryAudioBuffer;
IXAudio2SourceVoice* pSourceVoice;
internal void Win32InitXaudio2()
{
    WAVEFORMATEX WaveFormat = {};
    WaveFormat.wFormatTag = WAVE_FORMAT_PCM;
    WaveFormat.nChannels = AudioSetteings.Channels;
    WaveFormat.nSamplesPerSec = AudioSetteings.SampleRate;
    WaveFormat.wBitsPerSample = 16;
    WaveFormat.nBlockAlign = (WaveFormat.nChannels * WaveFormat.wBitsPerSample) / 8;
    WaveFormat.nAvgBytesPerSec = WaveFormat.nSamplesPerSec * WaveFormat.nBlockAlign;
    WaveFormat.cbSize = 0;
    IXAudio2* pXAudio2;
    IXAudio2MasteringVoice* pMasterVoice;
    XAudio2Create(&pXAudio2);
    pXAudio2->CreateMasteringVoice(&pMasterVoice);
    pXAudio2->CreateSourceVoice(&pSourceVoice, &WaveFormat);
}
//DOC: AudioBytes - Size of the audio data
//DOC: pAudioData - The buffer start location (needs to be type cast into a BYTE pointer)
internal void Win32CreateAudioBuffer(win32_audio_buffer *AudioBuffer)
{
    int32 Size = (int16)sizeof(int16) * AudioSetteings.Channels * AudioSetteings.SampleRate * AudioSetteings.LoopTime;
    AudioBuffer->Memory = (int16 *)VirtualAlloc(0, Size, MEM_COMMIT|MEM_RESERVE, PAGE_READWRITE);
    AudioBuffer->XAudioBuffer.AudioBytes = Size;
    AudioBuffer->XAudioBuffer.pAudioData = (BYTE *) AudioBuffer->Memory;
    //AudioBuffer->XAudioBuffer.Flags = XAUDIO2_END_OF_STREAM;
    AudioBuffer->XAudioBuffer.PlayBegin = 0;
    AudioBuffer->XAudioBuffer.PlayLength = AudioSetteings.TotalSamples;
    //AudioBuffer->XAudioBuffer.LoopCount = 10;
}
internal void Win32Playback(win32_audio_buffer *AudioBuffer)
{
    for (int32 Index = 0, Sample = 0; Sample < AudioSetteings.TotalSamples; Sample++)
    {
        real32 Sine = sinf(Sample * 2.0f * Pi32 / AudioSetteings.ToneHz);
        int16 value = (int16)(4000 * Sine);
        AudioBuffer->Memory[Index++] = value;
        AudioBuffer->Memory[Index++] = value;
    }
    pSourceVoice->SubmitSourceBuffer(&AudioBuffer->XAudioBuffer);
}
Win32InitXaudio2();
Win32CreateAudioBuffer(&MainAudioBuffer);
//Win32CreateAudioBuffer(&SecondaryAudioBuffer);
Win32Playback(&MainAudioBuffer);
//Win32Playback(&SecondaryAudioBuffer);
pSourceVoice->Start(0);
I have posted the relevant code here, and it just plays one sine buffer.
I tried alternating buffers and starting and ending on a zero-crossing.
I had a similar problem; maybe it will help someone.
The problem is allocating more memory for audio than needed.
I tried something like this and found the problem (this is not the solution, I'm just showing how I found the problem! If it doesn't help in your case, the problem is somewhere else):
// XAUDIO2_BUFFER m_xaudio2Buffer...
m_xaudio2Buffer.pAudioData = source->m_data;
m_xaudio2Buffer.AudioBytes = source->m_dataSize - 100; // -100 and `pop` sound is gone
m_xaudio2Buffer.Flags = XAUDIO2_END_OF_STREAM;
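Applied to the code in the question, the thing to verify is that XAudioBuffer.AudioBytes is derived from exactly the samples the fill loop writes; if the allocation is larger than what gets filled, the unwritten tail is played too and can pop. A sketch, reusing the question's names:
// Derive the byte count from the samples actually written:
// TotalSamples frames x Channels samples x 2 bytes per int16 sample.
int32 Size = AudioSetteings.TotalSamples * AudioSetteings.Channels * (int32)sizeof(int16);
AudioBuffer->XAudioBuffer.AudioBytes = Size;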

Loading non-power-of-two textures in Vulkan

My 2D texture loader works fine if my texture dimensions are power-of-two, but when they are not, the texture data displays as skewed. How do I fix this? I assume the issue has something to do with memory alignment and row pitch. Here are the relevant parts of my loader code:
VkMemoryRequirements memReqs;
vkGetImageMemoryRequirements( GfxDeviceGlobal::device, mappableImage, &memReqs );
VkMemoryAllocateInfo memAllocInfo = {};
memAllocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memAllocInfo.pNext = nullptr;
memAllocInfo.memoryTypeIndex = 0;
memAllocInfo.allocationSize = memReqs.size;
GetMemoryType( memReqs.memoryTypeBits, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, &memAllocInfo.memoryTypeIndex );
VkDeviceMemory mappableMemory;
err = vkAllocateMemory( GfxDeviceGlobal::device, &memAllocInfo, nullptr, &mappableMemory );
CheckVulkanResult( err, "vkAllocateMemory in Texture2D" );
err = vkBindImageMemory( GfxDeviceGlobal::device, mappableImage, mappableMemory, 0 );
CheckVulkanResult( err, "vkBindImageMemory in Texture2D" );
VkImageSubresource subRes = {};
subRes.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
subRes.mipLevel = 0;
subRes.arrayLayer = 0;
VkSubresourceLayout subResLayout;
vkGetImageSubresourceLayout( GfxDeviceGlobal::device, mappableImage, &subRes, &subResLayout );
void* mapped;
err = vkMapMemory( GfxDeviceGlobal::device, mappableMemory, 0, memReqs.size, 0, &mapped );
CheckVulkanResult( err, "vkMapMemory in Texture2D" );
const int bytesPerPixel = 4;
std::size_t dataSize = bytesPerPixel * width * height;
std::memcpy( mapped, data, dataSize );
vkUnmapMemory( GfxDeviceGlobal::device, mappableMemory );
The VkSubresourceLayout you obtained from vkGetImageSubresourceLayout contains the pitch of the texture in its rowPitch member. This is more than likely not equal to the width; thus, when you memcpy the entire data block in one go, you end up copying relevant data into the padding section of the texture.
Instead, you will need to memcpy row by row, skipping the padding memory in the mapped texture:
const int bytesPerPixel = 4;
std::size_t dataRowSize = bytesPerPixel * width;
char* mappedBytes = (char*)mapped;
for (int i = 0; i < height; ++i)
{
    std::memcpy(mappedBytes, data, dataRowSize);
    mappedBytes += subResLayout.rowPitch;
    data += dataRowSize;
}
(this code assumes data is a char* as well; its declaration wasn't given)
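One more detail worth checking: vkGetImageSubresourceLayout also reports the subresource's starting byte offset within the mapped memory, so strictly speaking the destination pointer should begin there (a small addition to the loop above):
// Start the row-by-row copy at the subresource's own offset into the mapping.
// The offset is often 0 for mip 0 / layer 0, but that is not guaranteed.
char* mappedBytes = (char*)mapped + subResLayout.offset;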

Neopixel arduino fading from colour to colour using a Sparkcore

This question is a follow-up to another question I asked here that was answered.
I have the following function:
MotiColor startColor;
MotiColor endColor;
void setup()
{
    // Begin strip.
    strip.begin();
    // Initialize all pixels to 'off'.
    strip.show();
    Serial1.begin(9600);
    startColor = MotiColor(0, 0, 0);
    endColor = MotiColor(0, 0, 0);
}
void loop () {
}
int tinkerSetColour(String command)
{
    strip.show();
    int commaIndex = command.indexOf(',');
    int secondCommaIndex = command.indexOf(',', commaIndex+1);
    int lastCommaIndex = command.lastIndexOf(',');
    String red = command.substring(0, commaIndex);
    String grn = command.substring(commaIndex+1, secondCommaIndex);
    String blu = command.substring(lastCommaIndex+1);
    startColor = MotiColor(red.toInt(), grn.toInt(), blu.toInt());
    int16_t redDiff = endColor.getR() - startColor.getR();
    int16_t greenDiff = endColor.getG() - startColor.getG();
    int16_t blueDiff = endColor.getB() - startColor.getB();
    int16_t _delay = 500;
    int16_t duration = 3500;
    int16_t steps = duration / _delay;
    int16_t redValue, greenValue, blueValue;
    for (int16_t i = steps; i >= 0; i--) {
        redValue = (int16_t)startColor.getR() + (redDiff * i / steps);
        greenValue = (int16_t)startColor.getG() + (greenDiff * i / steps);
        blueValue = (int16_t)startColor.getB() + (blueDiff * i / steps);
        sprintf(rgbString, "%i,%i,%i", redValue, greenValue, blueValue);
        Spark.publish("rgb", rgbString);
        for (uint16_t i = 0; i < strip.numPixels(); i++) {
            strip.setPixelColor(i, strip.Color(redValue, greenValue, blueValue));
        }
        delay(_delay);
    }
    delay(_delay);
    for (uint16_t i = 0; i < strip.numPixels(); i++) {
        strip.setPixelColor(i, strip.Color(endColor.getR(), endColor.getG(), endColor.getB()));
    }
    delay(_delay);
    endColor = MotiColor(startColor.getR(), startColor.getG(), startColor.getB());
    return 1;
}
I am seeing the published results correctly:
This is from OFF (0,0,0) -> RED (255,0,0) -> GREEN (0,255,0).
It works fine when I publish the results back to a web console via the Spark.publish() event; however, the actual NeoPixel LEDs don't fade from colour to colour as expected. They just jump from colour to colour instead of fading nicely between them.
I'm just wondering where I'm going wrong or how I can improve my code so that I actually see the fading in real time.
You have to call strip.show() in your for loop, like so:
for (int16_t i = steps; i >= 0; i--) {
    redValue = (int16_t)startColor.getR() + (redDiff * i / steps);
    greenValue = (int16_t)startColor.getG() + (greenDiff * i / steps);
    blueValue = (int16_t)startColor.getB() + (blueDiff * i / steps);
    sprintf(rgbString, "%i,%i,%i", redValue, greenValue, blueValue);
    Spark.publish("rgb", rgbString);
    for (uint16_t i = 0; i < strip.numPixels(); i++) {
        strip.setPixelColor(i, strip.Color(redValue, greenValue, blueValue));
    }
    // !!! Without this, you'll only see the result the next time you call
    // tinkerSetColor() !!!
    strip.show();
    delay(_delay);
}
To understand what's happening, look at the NeoPixel library source. You'll see that strip.setPixelColor() only stores the RGB value in memory (think of it as a drawing buffer, so that you can update the whole strip at once, which makes sense if you look at how the controller chips work). Calling strip.show() runs the routine that pushes the buffered values out to each pixel serially.