tex1Dfetch unexpectedly returning 0 - c++

I don't believe this is the same issue as reported here:
Bound CUDA texture reads zero
CUDA 1D texture fetch always return 0
In my CUDA application I noticed that tex1Dfetch does not return the expected value past a certain index in the buffer. An initial observation in the application was that the value at index 0 was read correctly, but at index 12705625 the value read was 0. I made a small test program to investigate this, given below. The results are a little baffling to me. I'm trying to probe at which index the values are no longer read correctly, but as arraySize is changed, so does the "firstBadIndex". Even with arraySize = 2, the second value is read incorrectly! As arraySize is made bigger, firstBadIndex gets bigger. This happens when binding to arrays of float, float2, or float4. If the data are read from the device buffer instead (switch around the commented lines in FetchTextureData), then everything is fine. This is using CUDA 6.5, on a Tesla C2075.
Thanks for any insights or advice you might have.
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#define FLOATTYPE float4
texture<FLOATTYPE,cudaTextureType1D,cudaReadModeElementType> texture1D;
const unsigned int arraySize = 1000;
FLOATTYPE* host;
FLOATTYPE* device;
FLOATTYPE* dTemp;
FLOATTYPE hTemp[1];
__global__ void FetchTextureData(FLOATTYPE* data,FLOATTYPE* arr,int idx)
{
data[0] = tex1Dfetch(texture1D, idx);
//data[0] = arr[idx];
}
bool GetTextureValues(int idx){
FetchTextureData<<<1,1>>>(dTemp,device,idx);
// copy to the host
cudaError_t err = cudaMemcpy(hTemp,dTemp,sizeof(FLOATTYPE),cudaMemcpyDeviceToHost);
if (err != cudaSuccess) {
throw "cudaMemcpy failed!";
}
if (cudaDeviceSynchronize() != cudaSuccess) {
throw "cudaDeviceSynchronize failed!";
}
return hTemp[0].x == 1.0f;
}
int main()
{
try{
host = new FLOATTYPE[arraySize];
cudaError_t err = cudaMalloc((void**)&device,sizeof(FLOATTYPE) * arraySize);
cudaError_t err1 = cudaMalloc((void**)&dTemp,sizeof(FLOATTYPE));
if (err != cudaSuccess || err1 != cudaSuccess) {
throw "cudaMalloc failed!";
}
// make some host data
for(unsigned int i=0; i<arraySize; i++){
FLOATTYPE data = {1.0f, 0.0f, 0.0f, 0.0f};
host[i] = data;
}
// and copy it to the device
err = cudaMemcpy(device,host,sizeof(FLOATTYPE) * arraySize,cudaMemcpyHostToDevice);
if (err != cudaSuccess){
throw "cudaMemcpy failed!";
}
// set up the textures
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<FLOATTYPE>();
texture1D.addressMode[0] = cudaAddressModeClamp;
texture1D.filterMode = cudaFilterModePoint;
texture1D.normalized = false;
cudaBindTexture(NULL, texture1D, device, channelDesc, arraySize);
// do a texture fetch and find where the fetches stop working
int lastGoodValue = -1, firstBadValue = -1;
float4 badValue = {-1.0f,0.0f,0.0f,0.0f};
for(unsigned int i=0; i<arraySize; i++){
if(i % 100000 == 0) printf("%d\n",i);
bool isGood = GetTextureValues(i);
if(firstBadValue == -1 && !isGood)
firstBadValue = i;
if(isGood)
lastGoodValue = i;
else
badValue = hTemp[0];
}
printf("lastGoodValue %d, firstBadValue %d\n",lastGoodValue,firstBadValue);
printf("Bad value is (%.2f)\n",badValue.x);
}catch(const char* err){
printf("\nCaught an error : %s\n",err);
}
return 0;
}

The problem lies in the texture set up. This:
cudaBindTexture(NULL, texture1D, device, channelDesc, arraySize);
should be:
cudaBindTexture(NULL, texture1D, device, channelDesc,
arraySize * sizeof(FLOATTYPE));
As per the documentation, the size argument is the size of the memory area in bytes, not the number of elements. I would have expected that with the clamped addressing mode the code would still work as expected. With border mode, you should get a zero value, which looks like it would trigger your bad-value detection. I haven't actually run your code, so perhaps there is a subtlety I'm missing somewhere. For such a simple repro case, your code structure is rather convoluted and hard to follow (at least on the mobile phone screen I am reading it on).
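In case it helps, here is a minimal sketch of the corrected bind with basic error checking (untested, reusing the names from your code; the offset output parameter is optional, but a non-zero offset would shift your indices):
size_t offset = 0;
cudaError_t bindErr = cudaBindTexture(&offset, texture1D, device, channelDesc,
                                      arraySize * sizeof(FLOATTYPE));
if (bindErr != cudaSuccess || offset != 0) {
    // The size is in bytes; a failed bind or a non-zero offset means the
    // fetches would not map to the indices you expect.
    throw "cudaBindTexture failed!";
}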
EDIT to add that between the time I started writing this and finished, @njuffa pointed out the same mistake in the comments.

Related

Why does adding audio stream to ffmpeg's libavcodec output container cause a crash?

As it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.
My problem is, when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It's in static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt). Where st->internal->priv_pts->val = pkt->dts; is attempted, priv_pts is nullptr.
I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.
My MediaContentMgr is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.
The code that, when added, triggers the nullptr exception is called out inline.
The .h:
#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}
#include "shader_s.h"
class MediaContainerMgr {
public:
MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
const glm::vec3* extents);
~MediaContainerMgr();
void render();
bool recording() { return m_recording; }
// Major thanks to "shi-yan" who helped make this possible:
// https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
bool output_video_frame(uint8_t* buf);
bool finalize_output();
private:
AVFormatContext* m_format_context;
AVCodec* m_video_codec;
AVCodec* m_audio_codec;
AVCodecParameters* m_video_codec_parameters;
AVCodecParameters* m_audio_codec_parameters;
AVCodecContext* m_codec_context;
AVFrame* m_frame;
AVPacket* m_packet;
uint32_t m_video_stream_index;
uint32_t m_audio_stream_index;
void init_rendering(const glm::vec3* extents);
int decode_packet();
// For writing the output video:
void free_output_assets();
bool m_recording;
AVOutputFormat* m_output_format;
AVFormatContext* m_output_format_context;
AVCodec* m_output_video_codec;
AVCodecContext* m_output_video_codec_context;
AVFrame* m_output_video_frame;
SwsContext* m_output_scale_context;
AVStream* m_output_video_stream;
AVCodec* m_output_audio_codec;
AVStream* m_output_audio_stream;
AVCodecContext* m_output_audio_codec_context;
};
#endif
And, the hellish .cpp:
#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include "media_container_manager.h"
MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
const glm::vec3* extents) :
m_video_stream_index(-1),
m_audio_stream_index(-1),
m_recording(false),
m_output_format(nullptr),
m_output_format_context(nullptr),
m_output_video_codec(nullptr),
m_output_video_codec_context(nullptr),
m_output_video_frame(nullptr),
m_output_scale_context(nullptr),
m_output_video_stream(nullptr)
{
// AVFormatContext holds header info from the format specified in the container:
m_format_context = avformat_alloc_context();
if (!m_format_context) {
throw "ERROR could not allocate memory for Format Context";
}
// open the file and read its header. Codecs are not opened here.
if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
throw "ERROR could not open input file for reading";
}
printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
//read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
if (avformat_find_stream_info(m_format_context, NULL) < 0) {
throw "ERROR could not get stream info";
}
for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
AVCodecParameters* local_codec_parameters = NULL;
local_codec_parameters = m_format_context->streams[i]->codecpar;
printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
AVCodec* local_codec = NULL;
local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
if (local_codec == NULL) {
throw "ERROR unsupported codec!";
}
if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
if (m_video_stream_index == -1) {
m_video_stream_index = i;
m_video_codec = local_codec;
m_video_codec_parameters = local_codec_parameters;
}
m_height = local_codec_parameters->height;
m_width = local_codec_parameters->width;
printf("Video Codec: resolution %dx%d\n", m_width, m_height);
}
else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
if (m_audio_stream_index == -1) {
m_audio_stream_index = i;
m_audio_codec = local_codec;
m_audio_codec_parameters = local_codec_parameters;
}
printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
}
printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
}
m_codec_context = avcodec_alloc_context3(m_video_codec);
if (!m_codec_context) {
throw "ERROR failed to allocate memory for AVCodecContext";
}
if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
throw "ERROR failed to copy codec params to codec context";
}
if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
throw "ERROR avcodec_open2 failed to open codec";
}
m_frame = av_frame_alloc();
if (!m_frame) {
throw "ERROR failed to allocate AVFrame memory";
}
m_packet = av_packet_alloc();
if (!m_packet) {
throw "ERROR failed to allocate AVPacket memory";
}
}
MediaContainerMgr::~MediaContainerMgr() {
avformat_close_input(&m_format_context);
av_packet_free(&m_packet);
av_frame_free(&m_frame);
avcodec_free_context(&m_codec_context);
glDeleteVertexArrays(1, &m_VAO);
glDeleteBuffers(1, &m_VBO);
}
bool MediaContainerMgr::advance_frame() {
while (true) {
if (av_read_frame(m_format_context, m_packet) < 0) {
// Do we actually need to unref the packet if it failed?
av_packet_unref(m_packet);
continue;
//return false;
}
else {
if (m_packet->stream_index == m_video_stream_index) {
//printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
int response = decode_packet();
av_packet_unref(m_packet);
if (response != 0) {
continue;
//return false;
}
return true;
}
else {
printf("m_packet->stream_index: %d\n", m_packet->stream_index);
printf(" m_packet->pts: %lld\n", m_packet->pts);
printf(" mpacket->size: %d\n", m_packet->size);
if (m_recording) {
int err = 0;
//err = avcodec_send_packet(m_output_video_codec_context, m_packet);
printf(" encoding error: %d\n", err);
}
}
}
// We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
if (m_frame == NULL)
return false;
if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
printf("WARNING: null frame data");
continue;
}
*/
}
}
int MediaContainerMgr::decode_packet() {
// Supply raw packet data as input to a decoder
// https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
int response = avcodec_send_packet(m_codec_context, m_packet);
if (response < 0) {
char buf[256];
av_strerror(response, buf, 256);
printf("Error while receiving a frame from the decoder: %s\n", buf);
return response;
}
// Return decoded output data (into a frame) from a decoder
// https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
response = avcodec_receive_frame(m_codec_context, m_frame);
if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
return response;
} else if (response < 0) {
char buf[256];
av_strerror(response, buf, 256);
printf("Error while receiving a frame from the decoder: %s\n", buf);
return response;
} else {
printf(
"Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d [DTS %d]\n",
m_codec_context->frame_number,
av_get_picture_type_char(m_frame->pict_type),
m_frame->pkt_size,
m_frame->pts,
m_frame->key_frame,
m_frame->coded_picture_number
);
}
return 0;
}
bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
if (m_recording)
return true;
m_recording = true;
advance_to(0L); // I've deleted the implementation. Just seeks to beginning of vid. Works fine.
if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
printf("Cannot guess output format.\n");
return false;
}
int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
if (err < 0) {
printf("Failed to allocate output context.\n");
return false;
}
//TODO(P0): Break out the video and audio inits into their own methods.
m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
if (!m_output_video_codec) {
printf("Failed to create video codec.\n");
return false;
}
m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
if (!m_output_video_stream) {
printf("Failed to find video format.\n");
return false;
}
m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
if (!m_output_video_codec_context) {
printf("Failed to create video codec context.\n");
return(false);
}
m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
m_output_video_stream->codecpar->width = width;
m_output_video_stream->codecpar->height = height;
m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
// Use the same bit rate as the input stream.
m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;
//TODO(P1): Set these to match the input stream?
m_output_video_codec_context->max_b_frames = 2;
m_output_video_codec_context->gop_size = 12;
m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
//m_output_codec_context->refcounted_frames = 0;
if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264) {
av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
} else if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
} else {
av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
}
avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);
m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
if (!m_output_audio_codec) {
printf("Failed to create audio codec.\n");
return false;
}
I've commented out all of the audio stream init beyond this next line, because this is where
the trouble begins. Creating this output stream causes the null reference I mentioned. If I
uncomment everything below here, I still get the null deref. If I comment out this line, the
deref exception vanishes. (IOW, I commented out more and more code until I found that this
was the trigger that caused the problem.)
I assume that there's something I'm doing wrong in the rest of the commented out code, that,
when fixed, will fix the nullptr and give me a working audio stream.
m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
if (!m_output_audio_stream) {
printf("Failed to find audio format.\n");
return false;
}
/*
m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
if (!m_output_audio_codec_context) {
printf("Failed to create audio codec context.\n");
return(false);
}
m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
*/
//TODO(P2): Free assets that have been allocated.
err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
if (err < 0) {
printf("Failed to open codec.\n");
return false;
}
if (!(m_output_format->flags & AVFMT_NOFILE)) {
err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
if (err < 0) {
printf("Failed to open output file.");
return false;
}
}
err = avformat_write_header(m_output_format_context, NULL);
if (err < 0) {
printf("Failed to write header.\n");
return false;
}
av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);
return true;
}
//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
for (int i = 0; i < 4; i++) {
if (i)
pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
else
pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
pFrame->linesize[i] = -pFrame->linesize[i];
}
}
This is where we take an altered frame and write it to the output container. This works fine
as long as we haven't set up an audio stream in the output container.
bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
int err;
if (!m_output_video_frame) {
m_output_video_frame = av_frame_alloc();
m_output_video_frame->format = AV_PIX_FMT_YUV420P;
m_output_video_frame->width = m_output_video_codec_context->width;
m_output_video_frame->height = m_output_video_codec_context->height;
err = av_frame_get_buffer(m_output_video_frame, 32);
if (err < 0) {
printf("Failed to allocate output frame.\n");
return false;
}
}
if (!m_output_scale_context) {
m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height,
AV_PIX_FMT_RGB24,
m_output_video_codec_context->width, m_output_video_codec_context->height,
AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
}
int inLinesize[1] = { 3 * m_output_video_codec_context->width };
sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
m_output_video_frame->data, m_output_video_frame->linesize);
PrepareFlipFrameJ420(m_output_video_frame);
//TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
// an audio frame if I threadify the frame reading.
m_output_video_frame->pts = m_frame->pts;
printf("Output PTS: %d, time_base: %d/%d\n", m_output_video_frame->pts,
m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
if (err < 0) {
printf(" ERROR sending new video frame output: ");
switch (err) {
case AVERROR(EAGAIN):
printf("AVERROR(EAGAIN): %d\n", err);
break;
case AVERROR_EOF:
printf("AVERROR_EOF: %d\n", err);
break;
case AVERROR(EINVAL):
printf("AVERROR(EINVAL): %d\n", err);
break;
case AVERROR(ENOMEM):
printf("AVERROR(ENOMEM): %d\n", err);
break;
}
return false;
}
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = nullptr;
pkt.size = 0;
pkt.flags |= AV_PKT_FLAG_KEY;
int ret = 0;
if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
static int counter = 0;
printf("pkt.key: 0x%08x, pkt.size: %d, counter:\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
uint8_t* size = ((uint8_t*)pkt.data);
printf("sizes: %d %d %d %d %d %d %d %d %d\n", size[0], size[1], size[2], size[2], size[3], size[4], size[5], size[6], size[7]);
av_interleaved_write_frame(m_output_format_context, &pkt);
}
printf("push: %d\n", ret);
av_packet_unref(&pkt);
return true;
}
bool MediaContainerMgr::finalize_output() {
if (!m_recording)
return true;
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = nullptr;
pkt.size = 0;
for (;;) {
avcodec_send_frame(m_output_video_codec_context, nullptr);
if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
av_interleaved_write_frame(m_output_format_context, &pkt);
printf("final push:\n");
} else {
break;
}
}
av_packet_unref(&pkt);
av_write_trailer(m_output_format_context);
if (!(m_output_format->flags & AVFMT_NOFILE)) {
int err = avio_close(m_output_format_context->pb);
if (err < 0) {
printf("Failed to close file. err: %d\n", err);
return false;
}
}
return true;
}
EDIT
The call stack on the crash (which I should have included in the original question):
avformat-58.dll!compute_muxer_pkt_fields(AVFormatContext * s, AVStream * st, AVPacket * pkt) Line 630 C
avformat-58.dll!write_packet_common(AVFormatContext * s, AVStream * st, AVPacket * pkt, int interleaved) Line 1122 C
avformat-58.dll!write_packets_common(AVFormatContext * s, AVPacket * pkt, int interleaved) Line 1186 C
avformat-58.dll!av_interleaved_write_frame(AVFormatContext * s, AVPacket * pkt) Line 1241 C
CamBot.exe!MediaContainerMgr::output_video_frame(unsigned char * buf) Line 553 C++
CamBot.exe!main() Line 240 C++
If I move the call to avformat_write_header so it's immediately before the audio stream initialization, I still get a crash, but in a different place. The crash happens on line 6459 of movenc.c, where we have:
/* Non-seekable output is ok if using fragmentation. If ism_lookahead
* is enabled, we don't support non-seekable output at all. */
if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL) && // CRASH IS HERE
(!(mov->flags & FF_MOV_FLAG_FRAGMENT) || mov->ism_lookahead)) {
av_log(s, AV_LOG_ERROR, "muxer does not support non seekable output\n");
return AVERROR(EINVAL);
}
The exception is a nullptr exception, where s->pb is NULL. The call stack is:
avformat-58.dll!mov_init(AVFormatContext * s) Line 6459 C
avformat-58.dll!init_muxer(AVFormatContext * s, AVDictionary * * options) Line 407 C
[Inline Frame] avformat-58.dll!avformat_init_output(AVFormatContext *) Line 489 C
avformat-58.dll!avformat_write_header(AVFormatContext * s, AVDictionary * * options) Line 512 C
CamBot.exe!MediaContainerMgr::init_video_output(const std::string & video_file_name, unsigned int width, unsigned int height) Line 424 C++
CamBot.exe!main() Line 183 C++
Please note that you should always try to provide a self-contained minimal working example to make it easier for others to help. With the actual code, the matching FFmpeg version, and an input video that triggers the segmentation fault (to be sure), the issue would be a matter of analyzing the control flow to identify why st->internal->priv_pts was not allocated. Without the full scenario, I have to resort to making assumptions that may or may not correspond to your actual code.
Based on your description, I attempted to reproduce the issue by cloning https://github.com/FFmpeg/FFmpeg.git and creating a new branch from commit b52e0d95 (November 4, 2020) to approximate your FFmpeg version.
I recreated your scenario using the provided code snippets by
including the avformat_new_stream() call for the audio stream
keeping the remaining audio initialization commented out
including the original avformat_write_header() call site (unchanged order)
With that scenario, the video write with MP4 video/audio input fails in avformat_write_header():
[mp4 @ 0x2b39f40] sample rate not set 0
The call stack of the error location:
#0 0x00007ffff75253d7 in raise () from /lib64/libc.so.6
#1 0x00007ffff7526ac8 in abort () from /lib64/libc.so.6
#2 0x000000000094feca in init_muxer (s=0x2b39f40, options=0x0) at libavformat/mux.c:309
#3 0x00000000009508f4 in avformat_init_output (s=0x2b39f40, options=0x0) at libavformat/mux.c:490
#4 0x0000000000950a10 in avformat_write_header (s=0x2b39f40, options=0x0) at libavformat/mux.c:514
[...]
In init_muxer(), the sample rate in the stream parameters is checked unconditionally:
case AVMEDIA_TYPE_AUDIO:
if (par->sample_rate <= 0) {
av_log(s, AV_LOG_ERROR, "sample rate not set %d\n", par->sample_rate); abort();
ret = AVERROR(EINVAL);
goto fail;
}
That condition has been in effect since 2014-06-18 at the very least (I didn't go back any further) and still exists. So with a version from November 2020, the check must be active and the parameter must be set accordingly.
If I uncomment the remaining audio initialization, the situation remains unchanged (as expected). So, to satisfy the condition, I added the missing parameter as follows:
m_output_audio_stream->codecpar->sample_rate =
m_format_context->streams[m_audio_stream_index]->codecpar->sample_rate;
With that, the check succeeds, avformat_write_header() succeeds, and the actual video write succeeds.
As you indicated in your question, the segmentation fault is caused by st->internal->priv_pts being NULL at this location:
#0 0x00000000009516db in compute_muxer_pkt_fields (s=0x2b39f40, st=0x2b3a580, pkt=0x7fffffffe2d0) at libavformat/mux.c:632
#1 0x0000000000953128 in write_packet_common (s=0x2b39f40, st=0x2b3a580, pkt=0x7fffffffe2d0, interleaved=1) at libavformat/mux.c:1125
#2 0x0000000000953473 in write_packets_common (s=0x2b39f40, pkt=0x7fffffffe2d0, interleaved=1) at libavformat/mux.c:1188
#3 0x0000000000953634 in av_interleaved_write_frame (s=0x2b39f40, pkt=0x7fffffffe2d0) at libavformat/mux.c:1243
[...]
In the FFmpeg code base, the allocation of priv_pts is handled by init_pts() for all streams referenced by the context. init_pts() has two call sites:
libavformat/mux.c:496:
if (s->oformat->init && ret) {
if ((ret = init_pts(s)) < 0)
return ret;
return AVSTREAM_INIT_IN_INIT_OUTPUT;
}
libavformat/mux.c:530:
if (!s->internal->streams_initialized) {
if ((ret = init_pts(s)) < 0)
goto fail;
}
In both cases, the calls are triggered by avformat_write_header() (indirectly via avformat_init_output() for the first, directly for the second). According to control flow analysis, there's no success case that would leave priv_pts unallocated.
Considering the high probability that our versions of FFmpeg are compatible in terms of behavior, I have to assume that 1) the sample rate must be provided for audio streams and 2) priv_pts is always allocated by avformat_write_header() in the absence of errors. Therefore, two possible root causes come to mind:
Your stream is not an audio stream (unlikely; the type is based on the codec, which in turn is based on the output file extension - assuming mp4)
You do not call avformat_write_header() (unlikely), or you do not handle the error in the caller of your C++ member function (the return value of avformat_write_header() is checked, but I do not have the code corresponding to the caller of the C++ member function; your actual code might differ significantly from the code provided, so this is possible, and it is the only plausible conclusion that can be drawn from the available data)
The solution: Ensure that processing does not continue if avformat_write_header() fails. By adding the audio stream, avformat_write_header() starts to fail unless you set the stream sample rate. If the error is ignored, av_interleaved_write_frame() triggers a segmentation fault by accessing the unallocated st->internal->priv_pts.
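As a sketch of what I mean (the caller here is hypothetical, since I do not have that part of your code):
// Hypothetical caller of the C++ member function: stop as soon as the
// initialization (and thus avformat_write_header()) reports failure, so that
// av_interleaved_write_frame() is never reached with unallocated state.
if (!media_mgr.init_video_output(video_file_name, width, height)) {
    fprintf(stderr, "init_video_output failed, not writing any frames\n");
    return false; // or otherwise abort the recording path
}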
As mentioned initially, the scenario is incomplete. If you do call avformat_write_header() and stop processing in case of an error (meaning you do not call av_interleaved_write_frame()), more information is needed. As it stands now, that is unlikely. For further analysis, the executable output (stdout, stderr) is required to see your traces and FFmpeg log messages. If that does not reveal new information, a self-contained minimal working example and the video input are needed to get the full picture.

cudaMemcpy throws InvalidValue error when copying from device to host

I've been trying to implement a one dimensional FFT using cuFFT. An InvalidValue error is thrown and no meaningful results are produced.
I've tried to ensure that each error is caught, and I believe that the cudaMemcpy from DeviceToHost causes the issue, though I am not sure why, nor how to fix it. The data size parameter in cudaMemcpy follows the same relation as supplied by the cuFFT documentation.
#include <iostream>
#include <fstream>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <cuda_runtime_api.h>
#include <cufft.h>
// cuda macros
#define NX 100 // number of points
#define BATCH 1 // number of ffts to perform
#define RANK 1 //
#define IDIST 1 // distance between 1st elements of batches
#define ISTRIDE 1 // do every ISTRIDEth index
#define ODIST 1 // distance between 1st elements of output
#define OSTRIDE 1 // distance between output elements
void fft1d() {
// create plan for performing fft
cufftHandle plan;
if (cufftPlan1d(&plan, NX, CUFFT_R2C, BATCH) != CUFFT_SUCCESS) {
printf("Failed to create 1D plan\n");
return;
}
// assemble data
double temp_data[] = {2.598076211353316, 3.2402830701637395, 3.8494572900049224, 4.419388724529261, 4.944267282795252, 5.41874215947433, 5.837976382931011, 6.197696125093141, 6.494234270429254, 6.724567799874842, 6.886348608602047, 6.97792744346504, 6.998370716093996, 6.9474700202387565, 6.8257442563389, 6.6344343416615565, 6.37549055993378, 6.051552679431957, 5.665923042211819, 5.222532898817316, 4.725902331664744, 4.181094175657916, 3.5936624057845576, 2.9695955178498603, 2.315255479544737, 1.6373128742041732, 0.9426788984240022, 0.23843490677753865, -0.46823977812093664, -1.1701410542749289, -1.8601134815746807, -2.531123226988873, -3.176329770049035, -3.7891556376344524, -4.363353457155562, -4.893069644570959, -5.3729040779788875, -5.797965148448726, -6.163919626883915, -6.467036838555256, -6.704226694973039, -6.873071195387157, -6.971849076777267, -6.999553361041935, -6.955901620504255, -6.84133885708361, -6.657032965782207, -6.404862828733319, -6.0873991611848375, -5.707878304681281, -5.270169234606201, -4.778734118422206, -4.23858282669252, -3.6552218606153755, -3.0345982167228436, -2.383038761007964, -1.707185730522749, -1.0139290199674, -0.31033594356630245, 0.39642081173600463, 1.0991363072871054, 1.7906468025248339, 2.463902784786862, 3.1120408346390414, 3.728453594100783, 4.306857124485735, 4.841354967187034, 5.326498254347925, 5.757341256627454, 6.129491801786784, 6.439156050110601, 6.683177170206378, 6.859067520906216, 6.965034011197066, 6.999996379650895, 6.963598207007518, 6.85621054964381, 6.678928156888352, 6.433558310743566, 6.122602401787424, 5.749230429076629, 5.317248684008804, 4.831060947586139, 4.295623596650021, 3.7163950767501706, 3.0992802567403803, 2.4505702323708074, 1.7768781925409076, 1.0850720020162676, 0.3822041878858906, -0.3245599564963766, -1.0280154171511335, -1.7209909100394047, -2.3964219877733033, -3.0474230571943477, -3.667357573646071, -4.249905696354359, -4.78912871521179, -5.279529592175676, -5.716109000098287};
cufftReal *idata;
cudaMalloc((void**) &idata, sizeof(cufftComplex)*NX);
if (cudaGetLastError() != cudaSuccess) {
printf("Failed to allocate memory space for input data.\n");
return;
}
cudaMemcpy(idata, temp_data, sizeof(temp_data)/sizeof(double), cudaMemcpyHostToDevice);
if (cudaGetLastError() != cudaSuccess) {
printf("Failed to load time data to memory.\n");
return;
}
// prepare memory for return data
cufftComplex *odata;
cudaMalloc((void**) &odata, sizeof(cufftComplex)*(NX/2 + 1));
if (cudaGetLastError() != cudaSuccess) {
printf("Failed to allocate memory for output data.\n");
}
// perform fft
if (cufftExecR2C(plan, idata, odata) != CUFFT_SUCCESS) {
printf("Failed to perform fft.\n");
return;
}
I think the error is thrown here, at the cudaMemcpy.
// grab data from graphics and print (memcpy waits until complete) cuda memcopy doesn't complete
// can return errors from previous cuda calls if they haven't been caught
cufftComplex *out_temp_data;
size_t num_bytes = (NX/2 + 1)*sizeof(cufftComplex);
cudaMemcpy(out_temp_data, odata, num_bytes, cudaMemcpyDeviceToHost);
int error_value = cudaGetLastError();
printf("cudaMemcpy from device state: %i\n", error_value);
if(error_value != cudaSuccess) {
printf("Failed to pull data from device.\n");
return;
}
for (size_t i = 0; i < (NX/2 + 1); i++) {
printf("%lu %f %f\n", i, out_temp_data[i].x, out_temp_data[i].y);
}
// clean up
cufftDestroy(plan);
cudaFree(idata);
}
int main() {
fft1d();
return 0;
}
Memory must be allocated before cudaMemcpy can write the data. Thanks to generic-opto-guy for pointing this out.
In this case:
out_temp_data = new cufftComplex[NX/2 + 1];
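Put together, the receiving side would look roughly like this (a sketch based on the code above; free the host buffer when you are done with it):
cufftComplex *out_temp_data = new cufftComplex[NX/2 + 1]; // host memory to copy into
size_t num_bytes = (NX/2 + 1) * sizeof(cufftComplex);
// Now cudaMemcpy has a valid destination to write to.
cudaMemcpy(out_temp_data, odata, num_bytes, cudaMemcpyDeviceToHost);
// ... print or process the results ...
delete[] out_temp_data;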

Detect USB hardware keylogger

I need to determine whether a hardware keylogger (HKL) has been plugged in between a USB keyboard and the PC. This needs to be done in software, from user-land. Although the wiki says it is impossible to detect an HKL in software, several methods do exist. The best, and I think only, overview of this topic available on the net is "Detecting Hardware Keyloggers, by Fabian Mihailowitsch - youtube".
Using this overview I am developing a tool to detect USB hardware keyloggers. The sources for detecting PS/2 keyloggers were already shared by the author and are available here. So my task is to make it work for USB.
As suggested, I am using the libusb library to interface with the USB devices in the system.
So, these are the methods I have chosen in order to detect an HKL:
Find the USB keyboard that is bugged by the HKL. Note that the HKL itself is usually invisible in the system's device list and in the list returned by libusb.
Detect a Keyghost HKL by doing an interrupt read from the USB HID device, sending a USB reset (libusb_reset_device), and then reading again. If the data returned by the last read is not all zeros, a keylogger has been detected. This is described on page 45 of Mihailowitsch's presentation.
Time measurement. The idea is to measure the time it takes to send/receive packets via control transfers to the original keyboard, thousands of times. If an HKL has been plugged in, the program measures the time again and compares it with the original value. With an HKL in place the time should be noticeably greater.
The algorithm is (a rough sketch of the timing loop follows the list):
Send an output report to the keyboard (as a control transfer, HID_REPORT_TYPE_OUTPUT 0x02)
Wait for the ACKed packet
Repeat the loop (10,000 times)
Measure the time
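To make the timing idea concrete, this is roughly what I have in mind (just a sketch with no error handling; CTRL_OUT, HID_SET_REPORT and HID_REPORT_TYPE_OUTPUT are the same macros used in my code below):
#include <chrono>
// Sketch: time a number of control-transfer round trips to the keyboard and
// return the average in milliseconds. libusb_control_transfer returns once the
// transfer has completed (or the timeout expired), so the device's ACK is included.
double MeasureAverageRoundTripMs(libusb_device_handle *devh, int iterations)
{
    unsigned char report[1] = { 0x00 };
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        libusb_control_transfer(devh, CTRL_OUT, HID_SET_REPORT,
                                (HID_REPORT_TYPE_OUTPUT << 8) | 0x00, 0,
                                report, sizeof(report), 1000 /* ms timeout */);
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / iterations;
}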
Below is my code for each of the detection steps.
1. Find USB keyboard
libusb_device * UsbKeyboard::GetSpecifiedDevice(PredicateType pred)
{
if (_usbDevices == nullptr) return nullptr;
int i = 0;
libusb_device *dev = nullptr;
while ((dev = _usbDevices[i++]) != NULL)
{
struct libusb_device_descriptor desc;
int r = libusb_get_device_descriptor(dev, &desc);
if (r >= 0)
{
if (pred(desc))
return dev;
}
}
return nullptr;
}
libusb_device * UsbKeyboard::FindKeyboard()
{
return GetSpecifiedDevice([&](libusb_device_descriptor &desc) {
bool isKeyboard = false;
auto dev_handle = libusb_open_device_with_vid_pid(_context, desc.idVendor, desc.idProduct);
if (dev_handle != nullptr)
{
unsigned char buf[255] = "";
// product description contains 'Keyboard', usually string is 'USB Keyboard'
if (libusb_get_string_descriptor_ascii(dev_handle, desc.iProduct, buf, sizeof(buf)) >= 0)
isKeyboard = strstr((char*)buf, "Keyboard") != nullptr;
libusb_close(dev_handle);
}
return isKeyboard;
});
}
Here we iterate through all USB devices in the system and check their product string. On my system this string for the keyboard is 'USB Keyboard' (obviously).
Is checking the product string a reliable way to detect the keyboard? Are there other ways?
2. Detect Keyghost HKL using Interrupt read
int UsbKeyboard::DetectKeyghost(libusb_device *kbdev)
{
int r, i;
int transferred;
unsigned char answer[PACKET_INT_LEN];
unsigned char question[PACKET_INT_LEN];
for (i = 0; i < PACKET_INT_LEN; i++) question[i] = 0x40 + i;
libusb_device_handle *devh = nullptr;
if ((r = libusb_open(kbdev, &devh)) < 0)
{
ShowError("Error open device", r);
return r;
}
r = libusb_set_configuration(devh, 1);
if (r < 0)
{
ShowError("libusb_set_configuration error ", r);
goto out;
}
printf("Successfully set usb configuration 1\n");
r = libusb_claim_interface(devh, 0);
if (r < 0)
{
ShowError("libusb_claim_interface error ", r);
goto out;
}
r = libusb_interrupt_transfer(devh, 0x81 , answer, PACKET_INT_LEN,
&transferred, TIMEOUT);
if (r < 0)
{
ShowError("Interrupt read error ", r);
goto out;
}
if (transferred < PACKET_INT_LEN)
{
ShowError("Interrupt transfer short read %", r);
goto out;
}
for (i = 0; i < PACKET_INT_LEN; i++) {
if (i % 8 == 0)
printf("\n");
printf("%02x, %02x; ", question[i], answer[i]);
}
printf("\n");
out:
libusb_close(devh);
return 0;
}
I get this error from libusb_interrupt_transfer:
libusb: error [hid_submit_bulk_transfer] HID transfer failed: [5] Access denied
Interrupt read error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168)
I have no clue why it is 'access denied', then an I/O error, while GetLastError() returns 1168, which means 'Element not found' (what element?). Looking for help here.
3. Time measurement. Send an output report and wait for the ACK packet.
int UsbKeyboard::SendOutputReport(libusb_device *kbdev)
{
const int PACKET_INT_LEN = 1;
int r, i;
unsigned char answer[PACKET_INT_LEN];
unsigned char question[PACKET_INT_LEN];
for (i = 0; i < PACKET_INT_LEN; i++) question[i] = 0x30 + i;
for (i = 1; i < PACKET_INT_LEN; i++) answer[i] = 0;
libusb_device_handle *devh = nullptr;
if ((r = libusb_open(kbdev, &devh)) < 0)
{
ShowError("Error open device", r);
return r;
}
r = libusb_set_configuration(devh, 1);
if (r < 0)
{
ShowError("libusb_set_configuration error ", r);
goto out;
}
printf("Successfully set usb configuration 1\n");
r = libusb_claim_interface(devh, 0);
if (r < 0)
{
ShowError("libusb_claim_interface error ", r);
goto out;
}
printf("Successfully claim interface\n");
r = libusb_control_transfer(devh, CTRL_OUT, HID_SET_REPORT, (HID_REPORT_TYPE_OUTPUT << 8) | 0x00, 0, question, PACKET_INT_LEN, TIMEOUT);
if (r < 0) {
ShowError("Control Out error ", r);
goto out;
}
r = libusb_control_transfer(devh, CTRL_IN, HID_GET_REPORT, (HID_REPORT_TYPE_INPUT << 8) | 0x00, 0, answer, PACKET_INT_LEN, TIMEOUT);
if (r < 0) {
ShowError("Control In error ", r);
goto out;
}
out:
libusb_close(devh);
return 0;
}
The error is the same as for the interrupt read:
Control Out error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168
)
How can I fix this, please? Also, how do I wait for the ACK packet?
Thank you.
UPDATE:
I've spent a day searching and debugging. So currently my problem is only to send an output report via libusb_control_transfer. The 2nd method with the interrupt read is unnecessary to implement, because Windows denies access to reading from a USB device using ReadFile.
Only the libusb part is left; here is the code I want to make work (from the 3rd example):
// sending Output report (LED)
// ...
unsigned char buf[65];
buf[0] = 1; // First byte is report number
buf[1] = 0x80;
r = libusb_control_transfer(devh, CTRL_OUT,
HID_SET_REPORT/*0x9*/, (HID_REPORT_TYPE_OUTPUT/*0x2*/ << 8) | 0x00,
0, buf, (uint16_t)2, 1000);
...
The error I've got:
[ 0.309018] [00001c0c] libusb: debug [_hid_set_report] Failed to Write HID Output Report: [1] Incorrect function
Control Out error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168)
This error occurs right after the DeviceIoControl call in libusb's internals.
What does "Incorrect function" mean there?

Prevent Objective-C object called from C++ code being released

Here is my situation:
I'm using Audio Queue Services in order to record sound. When the callback function is called (as soon as the buffer is full), I send the buffer content to an Objective-C object to process it.
void AQRecorder::MyInputBufferHandler(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
NSLog(#"Callback ! Sending buffer content ...");
aqr->objectiveC_Call([NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataBytesCapacity]);
aqr->mRecordPacket += inNumPackets;
}
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
void AQRecorder::objectiveC_Call(NSData *buffer) {
MyObjCObject *myObj = [[MyObjCObject alloc] init];
[myObj process:buffer];
}
The problem here is that I get an EXC_BAD_ACCESS during processing (in myObj's process method), and after some research I guess that it's related to myObj being released.
MyObjCObject's process method performs a for loop over the buffer content, and I get the EXC_BAD_ACCESS error even if I just do an NSLog on the buffer values.
-(void)run:(NSData *)bufferReceived {
NSUInteger bufferSize = [bufferReceived length];
self.buffer = (short *)malloc(bufferSize);
memcpy(self.buffer, [bufferReceived bytes], bufferSize);
for(int i= 0; i < bufferSize; i++) {
NSLog("value: %i", buffer[i]);
}
}
Can you please tell me the right way to do this?
PS: My files have the .mm extension, ARC is enabled for the whole project, and the rest of my code seems to work as expected.
Thanks!
You malloc the buffer and cast it to '(short*)', but then you enumerate the buffer using 'bufferSize' (a number of bytes). That means the 'for' loop will eventually attempt to read past the end of the buffer, potentially resulting in 'EXC_BAD_ACCESS', because each iteration moves forward by a 'short' rather than a byte. You should change the loop to something like:
for(int i= 0; i < bufferSize / sizeof(short); i++) {
NSLog("value: %i", buffer[i]);
}
Either that or change the type of the 'buffer' member variable.

OpenSSL: AES CCM 256 bit encryption of large file by blocks: is it possible?

I am working on a task to encrypt large files with AES CCM mode (256-bit key length). Other parameters for encryption are:
tag size: 8 bytes
iv size: 12 bytes
Since we already use OpenSSL 1.0.1c I wanted to use it for this task as well.
The size of the files is not known in advance and they can be very large. That's why I wanted to read them in blocks and encrypt each block individually with EVP_EncryptUpdate, up to the file size.
Unfortunately the encryption works for me only if the whole file is encrypted at once. I get errors from EVP_EncryptUpdate or strange crashes if I attempt to call it multiple times. I tested the encryption on Windows 7 and Ubuntu Linux with gcc 4.7.2.
I was not able to find any information on the OpenSSL site about whether encrypting the data block by block is possible or not.
Additional references:
http://www.fredriks.se/?p=23
http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
Please see the code below that demonstrates what I attempted to achieve. Unfortunately it is failing where indicated in the for loop.
#include <QByteArray>
#include <openssl/evp.h>
// Key in HEX representation
static const char keyHex[] = "d896d105b05aaec8305d5442166d5232e672f8d5c6dfef6f5bf67f056c4cf420";
static const char ivHex[] = "71d90ebb12037f90062d4fdb";
// Test patterns
static const char orig1[] = "Very secret message.";
const int c_tagBytes = 8;
const int c_keyBytes = 256 / 8;
const int c_ivBytes = 12;
bool Encrypt()
{
EVP_CIPHER_CTX *ctx;
ctx = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx);
QByteArray keyArr = QByteArray::fromHex(keyHex);
QByteArray ivArr = QByteArray::fromHex(ivHex);
auto key = reinterpret_cast<const unsigned char*>(keyArr.constData());
auto iv = reinterpret_cast<const unsigned char*>(ivArr.constData());
// Initialize the context with the alg only
bool success = EVP_EncryptInit(ctx, EVP_aes_256_ccm(), nullptr, nullptr);
if (!success) {
printf("EVP_EncryptInit failed.\n");
return success;
}
success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, c_ivBytes, nullptr);
if (!success) {
printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_IVLEN) failed.\n");
return success;
}
success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, c_tagBytes, nullptr);
if (!success) {
printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_TAG) failed.\n");
return success;
}
success = EVP_EncryptInit(ctx, nullptr, key, iv);
if (!success) {
printf("EVP_EncryptInit failed.\n");
return success;
}
const int bsize = 16;
const int loops = 5;
const int finsize = sizeof(orig1)-1; // Don't encrypt '\0'
// Tell the alg we will encrypt size bytes
// http://www.fredriks.se/?p=23
int outl = 0;
success = EVP_EncryptUpdate(ctx, nullptr, &outl, nullptr, loops*bsize + finsize);
if (!success) {
printf("EVP_EncryptUpdate for size failed.\n");
return success;
}
printf("Set input size. outl: %d\n", outl);
// Additional authentication data (AAD) is not used, but 0 must still be
// passed to the function call:
// http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
static const unsigned char aadDummy[] = "dummyaad";
success = EVP_EncryptUpdate(ctx, nullptr, &outl, aadDummy, 0);
if (!success) {
printf("EVP_EncryptUpdate for AAD failed.\n");
return success;
}
printf("Set dummy AAD. outl: %d\n", outl);
const unsigned char *in = reinterpret_cast<const unsigned char*>(orig1);
unsigned char out[1000];
int len;
// Simulate multiple input data blocks (for example reading from file)
for (int i = 0; i < loops; ++i) {
// ** This function fails ***
if (!EVP_EncryptUpdate(ctx, out+outl, &len, in, bsize)) {
printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
return false;
}
outl += len;
}
if (!EVP_EncryptUpdate(ctx, out+outl, &len, in, finsize)) {
printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
return false;
}
outl += len;
int finlen;
// Finish with encryption
if (!EVP_EncryptFinal(ctx, out + outl, &finlen)) {
printf("DHAesDevice: EVP_EncryptFinal failed.\n");
return false;
}
outl += finlen;
// Append the tag to the end of the encrypted output
if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, c_tagBytes, out + outl)) {
printf("DHAesDevice: EVP_CIPHER_CTX_ctrl failed.\n");
return false;
};
outl += c_tagBytes;
out[outl] = '\0';
EVP_CIPHER_CTX_cleanup(ctx);
EVP_CIPHER_CTX_free(ctx);
QByteArray enc(reinterpret_cast<const char*>(out));
printf("Plain text size: %d\n", loops*bsize + finsize);
printf("Encrypted data size: %d\n", outl);
printf("Encrypted data: %s\n", enc.toBase64().data());
return true;
}
EDIT (Wrong Solution)
The feedback that I received made me think in a different direction, and I discovered that EVP_EncryptUpdate for the size must be called for each block that is being encrypted, not for the total size of the file. I moved it to just before the block is encrypted, like this:
for (int i = 0; i < loops; ++i) {
int buflen;
(void)EVP_EncryptUpdate(m_ctx, nullptr, &buflen, nullptr, bsize);
// Resize the output buffer to buflen here
// ...
// Encrypt into target buffer
(void)EVP_EncryptUpdate(m_ctx, out, &len, in, buflen);
outl += len;
}
AES CCM encryption block by block works this way, but not correctly, because each block is treated as an independent message.
EDIT 2
OpenSSL's implementation works properly only if the complete message is encrypted at once.
http://marc.info/?t=136256200100001&r=1&w=1
I decided to use Crypto++ instead.
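For reference, the single-shot flow that did work for me looks roughly like this (a condensed sketch of the code above; msgLen stands for the total plaintext size, and error checks are omitted):
// Same context/key/IV setup as above, but the plaintext goes in as one block.
EVP_EncryptInit(ctx, EVP_aes_256_ccm(), nullptr, nullptr);
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, c_ivBytes, nullptr);
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, c_tagBytes, nullptr);
EVP_EncryptInit(ctx, nullptr, key, iv);
int outl = 0, len = 0;
// Announce the total plaintext length first (CCM requirement).
EVP_EncryptUpdate(ctx, nullptr, &len, nullptr, msgLen);
// Encrypt the whole message in a single call; splitting this is what fails.
EVP_EncryptUpdate(ctx, out, &len, in, msgLen);
outl = len;
EVP_EncryptFinal(ctx, out + outl, &len);
outl += len;
// Append the authentication tag.
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, c_tagBytes, out + outl);
outl += c_tagBytes;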
For AEAD-CCM mode you cannot encrypt data after the associated data has been fed to the context.
Encrypt all the data first, and only pass the associated data after it.
I found some misconceptions here.
First of all, calling
EVP_EncryptUpdate(ctx, nullptr, &outl, ...)
this way is meant to tell you how much output buffer is needed, so that you can allocate the buffer and call the function a second time with a valid, big enough buffer as the second argument to hold the data.
You are also passing wrong values (overwritten by the previous call) when you actually add the encrypted output.