I am looking to send image data in binary format from a C++ WebSockets server to the browser and then have the browser interpret the data to display the image to the end user.
Here is my C++ code to read the image and send the data:
WebsocketClient& client = (WebsocketClient&) (*pClient);
WebsocketDataMessage response(EchoCommunication);
FILE *img = fopen("test_image.png", "rb");
fseek(img, 0, SEEK_END);
unsigned long filesize = ftell(img);
char *buffer = (char*)malloc(sizeof(char)*filesize);
rewind(img);
while (!feof(img)) {
    fread(buffer, 1, sizeof(buffer), img);
    response.SetArguments(buffer);
}
client.PushPacket(&response);
In the Chrome Network tab I can see binary frames being received, but I have had no luck displaying the image in the browser. I have doubts about whether the data being sent is valid.
Please let me know where I am going wrong.
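For comparison, here is a minimal sketch of reading the whole file into a single buffer in one go (how the buffer and its byte count are then attached to WebsocketDataMessage depends on that API, which I am only guessing at):

#include <cstdio>
#include <vector>

// Read the entire file into memory. The byte count has to travel with the
// buffer: PNG data contains NUL bytes, so it cannot be treated as a C string.
std::vector<char> read_file(const char *path)
{
    std::vector<char> data;
    FILE *img = std::fopen(path, "rb");
    if (!img)
        return data;
    std::fseek(img, 0, SEEK_END);
    long filesize = std::ftell(img);
    std::rewind(img);
    data.resize(filesize);
    if (std::fread(data.data(), 1, filesize, img) != (size_t)filesize)
        data.clear();
    std::fclose(img);
    return data;
}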
I made my own RTMP server using libav and FFmpeg. I receive as input either an FLV file or an RTMP stream "containing" an FLV file.
Since I manipulate the FLV file and the composition time of each frame, I would like to know whether there is a way to get this composition time.
I thought that, given my AVPacket, I could analyze the raw buffer to extract the right information, since I know that the FLV tag header is 11 bytes and that in the next 16 bytes I should find the composition time.
But it doesn't work.
This is a rough example of code:
AVPacket pkt;
AVFormatContext *ifmt_ctx;
int ret;
while (true)
{
    AVStream *in_stream, *out_stream;
    ret = av_read_frame(ifmt_ctx, &pkt);
    // get the composition time
}
AVPacket needs to be able to represent the data found in all media formats. Some formats (like MP4 and FLV) have a decode_time and a composition_time; others (like transport streams) have a decode_time and a presentation_time. To make it easier for the programmer, AVPacket chose one method to store the information and converts when needed. Luckily it's easy to convert back:
auto cts = pkt.pts - pkt.dts;
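Put into the read loop from the question, that looks roughly like this (a sketch; error handling and the rest of the loop body omitted):

AVPacket pkt;
while (av_read_frame(ifmt_ctx, &pkt) >= 0)
{
    // composition time offset, expressed in the stream's time base
    int64_t cts = pkt.pts - pkt.dts;
    // ... use cts ...
    av_free_packet(&pkt);
}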
I am writing a client-server system that uses the FFmpeg library to parse an H.264 stream into NAL units on the server side, then uses channel coding to send them over the network to the client side, where my application must be able to play the video.
The question is how to play the received AVPackets (NAL units) in my application as a video stream.
I have found this tutorial helpful and used it as the base for both the server and the client side.
Some sample code or a resource related to playing video not from a file, but from data inside the program using the FFmpeg library, would be very helpful.
I am sure that the received information is sufficient to play the video, because I tried saving the received data as a .h264 or .mp4 file and it can be played by VLC.
From what I understand of your question, you have the AVPackets and want to play a video. In reality this is two problems: 1. decoding your packets, and 2. playing the video.
For decoding your packets with FFmpeg, you should take a look at the documentation for AVPacket, AVCodecContext and avcodec_decode_video2 to get some ideas; the general idea is that you want to do something (I just wrote this in the browser, so take it with a grain of salt) along the lines of:
//the decoder context; set this up appropriately for your video. See the above links for the documentation
AVCodecContext *decoder_context;
std::vector<AVPacket> packets; //assume this has your packets
...
AVFrame *decoded_frame = av_frame_alloc();
int ret = -1;
int got_frame = 0;
for (AVPacket &packet : packets)
{
    avcodec_get_frame_defaults(decoded_frame);
    ret = avcodec_decode_video2(decoder_context, decoded_frame, &got_frame, &packet);
    if (ret < 0) {
        //had an error decoding the current packet or couldn't decode the packet
        break;
    }
    if (got_frame)
    {
        //send to whatever video player queue you're using/do whatever with the frame
        ...
    }
    got_frame = 0;
    av_free_packet(&packet);
}
It's a pretty rough sketch, but that's the general idea for your problem of decoding the AVPackets. As for your problem of playing the video, you have many options, which will likely depend more on your clients. What you're asking is a pretty large problem; I'd advise familiarizing yourself with the FFmpeg documentation and the provided examples at the FFmpeg site. Hope that makes sense.
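If whatever player you end up with wants raw RGB, a rough sketch of converting a decoded frame with libswscale looks like this (again just a sketch; width, height and pixel format come from your decoder context, and I'm assuming the same era of the API as above):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

SwsContext *sws = sws_getContext(decoder_context->width, decoder_context->height,
                                 decoder_context->pix_fmt,
                                 decoder_context->width, decoder_context->height,
                                 AV_PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
uint8_t *rgb_data[4];
int rgb_linesize[4];
av_image_alloc(rgb_data, rgb_linesize,
               decoder_context->width, decoder_context->height, AV_PIX_FMT_RGB24, 1);

sws_scale(sws, decoded_frame->data, decoded_frame->linesize, 0,
          decoder_context->height, rgb_data, rgb_linesize);
// rgb_data[0] now holds a packed RGB24 image you can hand to your UI/player queue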
I'm developing an application to send video over RTP to a client that can play only H.263 (1996) and H.263+ (1998).
To do this I've encoded the video using libav, following these steps (this is only part of the code):
av_register_all();
avformat_network_init();
Fmt = av_guess_format("rtp", NULL, NULL);
...
st = add_video_stream(FmtCtx, CODEC_ID_H263);
...
avio_open(&FmtCtx->pb, rtp_url, URL_WRONLY);
Finally I enter a loop where I encode the video. The problem is that the stream generated by this program is encoded as H.263-2000 (or H.263++), which the other side cannot understand; even though I use CODEC_ID_H263 or CODEC_ID_H263P in the initialization, the same thing happens.
Is it possible to encode in those old H.263 versions using libav? I haven't managed to do it, not even with ffmpeg commands. The stream is always H.263-2000 (PT=96).
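For reference, a simplified sketch of the kind of setup add_video_stream performs with the API of that era (the concrete width/height/bitrate values here are placeholders, not my real ones; plain H.263 (1996) only accepts the standard picture sizes such as QCIF and CIF):

AVCodec *codec = avcodec_find_encoder(CODEC_ID_H263);   // or CODEC_ID_H263P
AVStream *st = avformat_new_stream(FmtCtx, codec);
st->codec->codec_id      = CODEC_ID_H263;
st->codec->width         = 352;                         // CIF
st->codec->height        = 288;
st->codec->pix_fmt       = PIX_FMT_YUV420P;
st->codec->time_base.num = 1;
st->codec->time_base.den = 25;
st->codec->bit_rate      = 400000;
avcodec_open2(st->codec, codec, NULL);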
I have a server written in C++ that allows uploading files.
But I want to check the file size FIRST, and THEN upload the file if it is less than 100 MB, otherwise return an error.
Is there a function that can do that?
This is my function:
long bytes_read = recv(client_fd, tempBuffer, sizeof(tempBuffer),0);
But I cannot check the variable bytes_read while receiving the file, only after.
This is the problem.
You should send a 4-byte file length before the file data:
int file_size;
recv(client_fd, &file_size, sizeof(file_size), 0);
recv(client_fd, buffer, file_size, 0);
You should send the size of the file from the client BEFORE sending the file itself. Check the size and respond to your client whether you are ready to receive it or not.
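Note that recv() may return fewer bytes than requested, so in practice both reads need a small loop. A minimal sketch of reading an exact number of bytes from a blocking socket (the helper name recv_all is mine):

#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Read exactly len bytes; returns false on error or if the peer closes early.
bool recv_all(int fd, char *buf, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = recv(fd, buf + total, len - total, 0);
        if (n <= 0)
            return false;
        total += (size_t)n;
    }
    return true;
}

With that, the client sends the length first (ideally in network byte order via htonl/ntohl), the server checks it against the 100 MB limit, and only then reads the file data.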
I'm developing a call recorder for VoIP audio; the audio is encoded with the G.722 codec in a Cisco environment.
I have extracted the data from the RTP frames and decoded it to PCM as follows:
unsigned int payloadSize = htons(udpHdr->len) - (CONSTANT::UDP_HDR_SIZE + CONSTANT::RTP_HDR_SIZE);
char * payload = (char*)rtpHdr + CONSTANT::RTP_HDR_SIZE;
unsigned short m_payloadType = rtpHdr->pt;
//decode_state is initialized like: g722_decode_init(NULL, 64000, G722_SAMPLE_RATE_8000);
outBuffSize = g722_decode(decode_state, decompressed, (const uint8_t*)payload, payloadSize);
I store this decoded data in a file (together with all frames of the same flow, i.e. the same SSRC), and when I try to listen to the audio, I only hear noise.
I think this problem is due to the compression algorithm used by Cisco.
The behaviour of the decode function is correct.
Any suggestions?
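One way to rule out playback-parameter mismatches is to wrap the decoded samples in a minimal WAV header, so the player knows exactly how to interpret them. A sketch (8000 Hz mono 16-bit matches the G722_SAMPLE_RATE_8000 init above; a little-endian host is assumed):

#include <cstdio>
#include <cstdint>

// Write 16-bit mono PCM samples with a minimal WAV header.
void write_wav(const char *path, const int16_t *samples, uint32_t num_samples,
               uint32_t sample_rate)
{
    uint32_t data_bytes  = num_samples * 2;     // 16-bit mono
    uint32_t byte_rate   = sample_rate * 2;
    uint16_t block_align = 2, bits = 16, fmt = 1 /* PCM */, channels = 1;
    uint32_t riff_size   = 36 + data_bytes, fmt_size = 16;

    FILE *f = std::fopen(path, "wb");
    std::fwrite("RIFF", 1, 4, f); std::fwrite(&riff_size, 4, 1, f); std::fwrite("WAVE", 1, 4, f);
    std::fwrite("fmt ", 1, 4, f); std::fwrite(&fmt_size, 4, 1, f);
    std::fwrite(&fmt, 2, 1, f);   std::fwrite(&channels, 2, 1, f);
    std::fwrite(&sample_rate, 4, 1, f); std::fwrite(&byte_rate, 4, 1, f);
    std::fwrite(&block_align, 2, 1, f); std::fwrite(&bits, 2, 1, f);
    std::fwrite("data", 1, 4, f); std::fwrite(&data_bytes, 4, 1, f);
    std::fwrite(samples, 2, num_samples, f);
    std::fclose(f);
}

If the audio still sounds like noise even with the header declaring the expected rate, the problem is upstream of playback (payload extraction or the decode itself).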