I am streaming FLV video and everything works fine, but the client I have added to the NetStream is receiving (via the onMetaData function) only these parameters:
canSeekToEnd,
audiocodecid,
duration,
videocodecid
If it is not possible to get the width and height from the metadata, how can I get them?
I think it's better not to rely on the metadata, but I don't know whether it is possible to access the width/height from the Flash player.
You can add the missing width/height to FLV files with FLVTool2.
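For what it's worth, the player side also exposes the dimensions: once the stream starts rendering, the Video object's videoWidth and videoHeight properties are populated, so you can read them there instead of relying on onMetaData. As for FLVTool2, its update command re-scans the file and injects a fresh onMetaData tag (width and height included); a typical invocation, assuming a file named video.flv, would be:

flvtool2 -U video.flv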
Can you actually get the metadata listener to respond? Does it show your traces from it? I ask because some frameworks seem to prohibit it.
I am trying to access the frames of an RTCVideoRenderer without success; can you help me please?
I noticed that there is a "didCaptureVideoFrame" method in RTCVideoCapturerDelegate, but not in RTCVideoViewDelegate.
I have never done Objective-C. I added a method to RTCVideoViewDelegate to get frames (below "didChangeVideoSize"), but it does not get fired; I guess it does not work like that.
On Android I am able to access the remote frames using the "onFrame" method of VideoSink, so I thought it would be just as easy on iOS.
PS: To add the method, I took the framework out of the pod and put it directly into the project, because I noticed that when you modify a pod, the changes are not applied.
Here is the line I added:
- (void)videoView:(id<RTCVideoRenderer>)videoView didRenderVideoFrame:(RTCVideoFrame *)frame;
I will now try to compile the library with the changes I want.
EDIT:
I am now compiling the library. I noticed that several files need to be changed to be able to access frames; it will not be done just by adding 10 lines.
Solved thanks to this: How to get frame data in AppRTC iOS app for video modifications?
I used this line instead (because the names have changed since then):
@property(atomic, strong) RTCVideoFrame *videoFrame;
I wanted an "onFrame" callback like VideoSink on Android, but this will be OK for now.
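For anyone taking the same route, the gist of the change is a sketch like the following, not the exact upstream code: renderFrame: is the RTCVideoRenderer protocol method the view already implements (e.g. in RTCEAGLVideoView.m), and the idea is to stash each incoming frame in the new property so it can be read from outside:

- (void)renderFrame:(RTCVideoFrame *)frame {
  self.videoFrame = frame;  // expose the latest frame through the added property
  // ... the original rendering code stays as-is ...
}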
I am using Wowza Engine 4.5.0 and I am trying to change the chunk ID numbering to be based on the incoming packet time, instead of the default sequential numbering that causes problems when restarting the encoder.
From something like this:
...media_w112312312_b1024000_7.ts
...media_w112312312_b1024000_8.ts
to a timestamp notation where the chunks continue even after a restart
I read about the property cupertinoCalculateChunkIDBasedOnTimecode and followed the instructions in this guide to configure it:
https://www.wowza.com/docs/how-to-configure-apple-hls-packetization-cupertinostreaming#livepropref
but it does not work, or I am doing something wrong. Has anyone used the cupertinoCalculateChunkIDBasedOnTimecode property successfully?
Many thanks.
I have used the cupertinoCalculateChunkIDBasedOnTimecode property, and even if I restart my encoder, players are able to pick up the stream and play it successfully.
The page below will help you use it properly:
http://thewowza.guru/how-to-set-stream-timecodes-to-absolute-time/
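For reference, custom properties like this one live in [install-dir]/conf/[application]/Application.xml; the entry, added to the Properties container of the LiveStreamPacketizer section, looks roughly like this (treat it as a sketch and check the linked docs for exact placement, then restart the application):

<LiveStreamPacketizer>
    ...
    <Properties>
        <Property>
            <Name>cupertinoCalculateChunkIDBasedOnTimecode</Name>
            <Value>true</Value>
            <Type>Boolean</Type>
        </Property>
    </Properties>
</LiveStreamPacketizer>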
I have a project to develop an application that would allow one computer to 'send' a window to another computer.
To do that, I need, of course, to capture the window's output from my program.
Google searches led me to no relevant results, neither with libX11 nor with libxcb.
I also tried recording screenshots with xwd and import, but as they are quite slow, I'm only getting up to 3.5 fps.
Any help on how I could do that would be welcome (using libX11, libxcb, or something else).
By the way, I intend to use C++ for this program.
Thanks for reading.
Edit:
The fps test was made without sending any files; it was just "I took screenshots for 5 minutes and got 900 pictures".
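In case it helps, here is a minimal Xlib sketch for grabbing one window's pixels directly, without spawning xwd (it assumes you already know the target window's XID, e.g. from xwininfo or XQueryTree); if plain XGetImage is still too slow, the usual next step is XShmGetImage from the MIT-SHM extension, which avoids copying the image through the X socket:

#include <X11/Xlib.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    Window win = 0x3c00041;  // hypothetical XID of the target window (see xwininfo)

    XWindowAttributes attr;
    XGetWindowAttributes(dpy, win, &attr);

    // Grab the window's current contents as a client-side image
    XImage *img = XGetImage(dpy, win, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (img) {
        // img->data holds the raw pixels; img->bytes_per_line and
        // img->bits_per_pixel describe the buffer layout
        std::printf("%dx%d, %d bpp\n", img->width, img->height, img->bits_per_pixel);
        XDestroyImage(img);
    }
    XCloseDisplay(dpy);
    return 0;
}

Compile with g++ grab.cpp -lX11.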
I think you will need to capture screenshots and compress them before sending them over the network to make things faster. You could also decrease the quality of the screenshots (user-configurable) to speed things up further.
There are also techniques for sending only the changes (a diff of consecutive screenshots) to the other computer; see the sketch below.
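A very rough illustration of the "send only the changes" idea, assuming two consecutive frames of identical size in raw pixel buffers: find the dirty row range and transmit only that slice plus its offset:

#include <cstdint>
#include <cstring>

// Row range that differs between two frames, or {-1, -1} if identical.
// rowBytes = width * bytesPerPixel.
struct DirtyRows { int first, last; };

DirtyRows diffRows(const uint8_t *prev, const uint8_t *cur,
                   int height, int rowBytes) {
    DirtyRows d{-1, -1};
    for (int y = 0; y < height; ++y) {
        if (std::memcmp(prev + y * rowBytes, cur + y * rowBytes, rowBytes) != 0) {
            if (d.first < 0) d.first = y;
            d.last = y;  // keep extending to the last differing row
        }
    }
    return d;
}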
We use EasyRTC for sending image captures from an iPad (we create screenshots "manually" and send them via socket.io) to a web browser. On the server we have EasyRTC v1.0.12 and Socket.IO v0.9.16. It's hard to say what happened (I've just joined the project and encountered this issue; the PM says it was OK some time ago), but recently we started to notice that some frames are blacked out. We have been debugging this issue for a few days and are running out of ideas. We are not sure where the problem is.
We know that we send correct images from the device. We noticed that it happens only when the image is different from the previous one (but not always... it's easier to observe on a weaker internet connection). When the image is "repeated" (I mean it looks the same, but from the iPad's perspective we create it as a new instance), everything is fine.
Attached you can find info from the Chrome network debugger. As you can see in the thumbnails, the images are OK. The entries with Size/Content "from cache" are fine, but the ones with Size 0 and Content > 0 are those that give black screens when we want to draw them on the canvas.
Any idea what we're doing wrong? How can we debug it? It seems that the images are somehow downloaded later than when we try to draw them?
Our server is on AWS.
You are trying to send the images up as base64-encoded JPEGs, basically big text strings. The first thing you should be asking yourself is: is the text string I'm sending up actually getting to the server, or is it getting truncated? Check the length of the string being sent versus the length of the string being received, and then check the start and end of the string.
Finally we found the solution, which is pretty simple. It turned out that we were trying to draw the image before it was fully loaded. So what we did was move the drawing code into the image's onload handler, and now it works as expected.
var img = new Image();
// attach the handler before setting src, so the load event can't be missed
img.onload = function () {
    // drawImage is a method of the 2D context, not of the canvas element
    canvas.getContext("2d").drawImage(img, 0, 0);
};
img.src = "data:image/png;base64, <img content>";
I'm trying to stream (large) files over HTTP into a database. I'm using Tomcat and Jersey as the web framework. I noticed that if I POST a file to my resource, the file is first buffered on disk (in temp\MIME*.tmp) before it is handled in my @POST method.
This is really undesired behaviour, since it doubles the disk I/O and also leads to somewhat bad UX: when the browser is already done uploading, the user still needs to wait a few minutes (depending on the file size, of course) until he gets the HTTP response.
I know that this is probably not the best implementation of a large file upload (since there isn't even any resume capability), but those are the requirements. :/
So my question is whether there is any way to disable (disk) buffering for multipart POSTs. Buffering in memory is obviously too expensive, but I don't really see the need for disk buffering anyway (please explain). How do large sites like YouTube handle this situation? Or is there at least a way to give the user immediate feedback once the file has been sent? (That seems bad too, since something like an SQLException could still occur.)
In case anybody is still interested, I solved the same issue by using the Apache Commons FileUpload streaming API.
The code example on that page worked just fine for me.
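For reference, the core of that streaming API, paraphrased from the example on that page: constructing ServletFileUpload without a FileItemFactory keeps every part as a stream, so nothing is written to disk:

import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.apache.commons.fileupload.util.Streams;

public class StreamingUploadHandler {
    public void handle(HttpServletRequest request) throws Exception {
        ServletFileUpload upload = new ServletFileUpload(); // no factory -> pure streaming
        FileItemIterator iter = upload.getItemIterator(request);
        while (iter.hasNext()) {
            FileItemStream item = iter.next();
            try (InputStream stream = item.openStream()) {
                if (item.isFormField()) {
                    String value = Streams.asString(stream); // ordinary form field
                } else {
                    // stream the file part straight into the database here
                }
            }
        }
    }
}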
OK, so after days of reading and trying different things, I stumbled upon HttpServletRequest. At first I didn't even want to try it, since it takes away all the convenience of @FormDataParam, but since I didn't know what else to do...
It turns out it helped. When I use @Context HttpServletRequest request and request.getInputStream(), I don't get any disk buffering at all.
Now I just have to figure out how to get to the FormDataContentDisposition without @FormDataParam.
Edit:
OK. The multipart form data support probably has to buffer on disk in order to parse the request's InputStream. So it seems I have to parse it manually myself if I want to prevent any buffering. :(
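For anyone landing here, a minimal sketch of the @Context approach described above (storeInDatabase is a hypothetical stand-in for whatever sink you use; you receive the raw multipart body and must handle the boundaries yourself):

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/upload")
public class UploadResource {
    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response upload(@Context HttpServletRequest request) throws IOException {
        try (InputStream in = request.getInputStream()) {
            // raw multipart body, untouched by Jersey's multipart support
            storeInDatabase(in); // hypothetical sink
        }
        return Response.ok().build();
    }

    private void storeInDatabase(InputStream in) { /* write to the DB as you read */ }
}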
Your best bet is to take full control and write your own servlet that just grabs request.getInputStream() (or request.getReader() if you are consuming text) and does the streaming itself. Most frameworks make your life "easy" by handling the upload, temporary storage, etc. for you, and often make it difficult to do things like streaming. It's quite easy to grab the stream yourself and do whatever you want.
I'm pretty sure Jersey is writing the files to disk to ensure memory is not flooded. Since you know exactly what you need to do with the incoming data (stream it into the database), you will probably have to write your own MessageBodyReader and get Jersey to use it to process your incoming multipart data, along the lines of the sketch below.
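A skeleton of what that could look like (a sketch under the assumption that you register the reader as a @Provider; RawUpload is a hypothetical wrapper type, and note that Jersey controls the entity stream's lifetime, so consume it inside the resource method):

import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.Consumes;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

// Hypothetical wrapper the resource method declares as its entity parameter
class RawUpload {
    final InputStream stream;
    RawUpload(InputStream stream) { this.stream = stream; }
}

@Provider
@Consumes(MediaType.MULTIPART_FORM_DATA)
public class RawUploadReader implements MessageBodyReader<RawUpload> {
    @Override
    public boolean isReadable(Class<?> type, Type genericType,
                              Annotation[] annotations, MediaType mediaType) {
        return RawUpload.class == type;
    }

    @Override
    public RawUpload readFrom(Class<RawUpload> type, Type genericType,
                              Annotation[] annotations, MediaType mediaType,
                              MultivaluedMap<String, String> httpHeaders,
                              InputStream entityStream) throws IOException {
        // Hand the raw entity stream through untouched instead of letting
        // the multipart provider buffer it to disk
        return new RawUpload(entityStream);
    }
}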