How to configure a Basler camera to avoid duplicate images - C++

I have configured a Basler camera (acA1920-40um) connected to a USB port. When I use the PylonViewer software to store a sequence of still images, I get duplicate frames. What parameters should I change to prevent this from happening?
The parameters I set after connecting the camera to the PC are:
Enable acquisition frame rate = active
Acquisition frame rate (fps) = 25; trigger = off; exposure auto = off; exposure time = 1000 (µs)
In the next step, I grabbed frames using OpenCV and C++ with code similar to the one in the following link, which again gives me duplicate frames:
Convert images from Pylon to Opencv in c++
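
For reference, a minimal version of the kind of grab loop in that link (a sketch, not the asker's exact code, assuming the standard Pylon C++ API plus OpenCV; the grab strategy is one of the settings worth checking when frames appear twice):

#include <pylon/PylonIncludes.h>
#include <opencv2/opencv.hpp>

int main()
{
    Pylon::PylonInitialize();
    {
        // Attach to the first camera found; StartGrabbing opens it implicitly.
        Pylon::CInstantCamera camera(Pylon::CTlFactory::GetInstance().CreateFirstDevice());
        camera.StartGrabbing(Pylon::GrabStrategy_LatestImageOnly);

        // Convert Pylon buffers to 8-bit BGR so OpenCV can use them directly.
        Pylon::CImageFormatConverter converter;
        converter.OutputPixelFormat = Pylon::PixelType_BGR8packed;
        Pylon::CPylonImage converted;

        Pylon::CGrabResultPtr result;
        while (camera.IsGrabbing())
        {
            camera.RetrieveResult(5000, result, Pylon::TimeoutHandling_ThrowException);
            if (!result->GrabSucceeded())
                continue;
            converter.Convert(converted, result);
            cv::Mat frame(result->GetHeight(), result->GetWidth(), CV_8UC3,
                          static_cast<uint8_t*>(converted.GetBuffer()));
            cv::imshow("camera", frame);
            if (cv::waitKey(1) == 27) // Esc to quit
                break;
        }
    }
    Pylon::PylonTerminate();
    return 0;
}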

I had the same problem and contacted Basler customer service about it. The issue you are running into is likely due to how you have the recording options set in PylonViewer.
Go to the Recording Settings and set 'Record a frame every' to 1 and select 'Frame(s)' from the drop-down list.
[screenshot of the PylonViewer recording settings]
This worked for me. It was not at all intuitive that those settings applied to the 'Video' output; I thought they only related to the 'Sequence of still images' option, given the layout of the UI.

Related

OMNeT++ changing channel throughput sliding window default parameters in INET

Following the Measuring Channel Throughput tutorial, I am unable to locate a way to change the interval ([s]) and numValueLimit parameters. When I use them in the INI file, I get the following error: "Entry potentially does not match any parameters."
The tutorial states
Channel throughput is a statistic of transmitter modules, such as the PacketTransmitter in LayeredEthernetPhy. Throughput is measured with a sliding window. By default, the window is 0.1s or 100 packets, whichever comes first. The parameters of the window, such as the window interval, are configurable from the ini file, as module.statistic.parameter. For example:
*.host.eth[0].phyLayer.transmitter.throughput.interval = 0.2s
When I run the tutorial out of the box and allow ~50 packet transmissions, the throughput vector ChannelThroughputMeasurementShowcase.destination.eth[0].phyLayer.receiver has only one entry in throughput:vector, which is ~44 Mbps.
From what I can tell, this is an average over multiple measurements based on the sliding window.
What I would like to do is change this sliding window so that I get more values in the vector, per the settings described above.
Have these values been deprecated in this new version of OMNeT++/INET?
I'm using INET version inet-4.4.1-302861f35c along with OMNeT++ version 6.0, build id: 220413-71d8fab425.
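For completeness, this is the kind of entry I am attempting in omnetpp.ini, combining the parameter names from the tutorial quote above (the numValueLimit value is just an example; these are the entries that trigger the error in this version):

*.destination.eth[0].phyLayer.receiver.throughput.interval = 0.2s
*.destination.eth[0].phyLayer.receiver.throughput.numValueLimit = 200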

Video stream output via USB/TRRS port using OpenCV

I have a Raspberry Pi and I process video from a camera connected to it via USB. I need to output, in real time, only the processed video directly through the USB/TRRS port (not the entire desktop with an OpenCV window, just the video itself).
In the end, I just need to connect another board so that it receives the Raspberry Pi's output at its input as if it were just a camera.
P.S. A C++ or Python implementation doesn't matter.
P.P.S. Wireless transmission is not suitable; the Raspberry Pi must emulate USB/TRRS output like a real camera.
Some steps:
Connect the Raspberry Pi to the display.
Switch to a text console: Ctrl + Alt + F1.
Stop the desktop session: sudo service lightdm stop.
ls /dev/fb* (this should show the screen's framebuffer, e.g. fb0).
Then work with OpenCV like this:
import cv2

cap = cv2.VideoCapture(0)  # the USB camera
with open('/dev/fb0', 'rb+') as buf:
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame32 = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)  # fb0 expects 32-bit BGRA
        fbframe = cv2.resize(frame32, (1920, 1080))        # match the display resolution
        buf.seek(0)  # rewind so each frame overwrites the previous one
        buf.write(fbframe)
thread that helped

Flutter video_player initialization error when initializing multiple videos

Problem I need help with
I need help optimizing latency in my "Short-Video Feed" and solving intermittent performance bugs. A central feature of my app is seamless playback of 15-to-60-second clips as users swipe up, similar to TikTok and Instagram Reels. Right now I have intermittent performance bugs such as black screens, delayed loading screens, and sometimes long loading times.
The bugs may be caused by Flutter being slower than native iOS. However, our "Short-Video Feed" has lots of bugs whether I use M3U8 (Mux) or an MP4-based approach with AWS S3.
If I use the Mux-based approach with M3U8, there is a noticeable black screen of a few milliseconds at the start of each short-video playback.
If I use the Amazon-based approach with MP4, the "Short-Video Feed" intermittently loads for a few seconds (sometimes minutes) when bandwidth is low, and some videos stay stuck even after the user returns to a location with faster bandwidth.
Open issue on Flutter
https://github.com/flutter/flutter/issues/25558
Approaches I have tried without success:
Native player. I tried using a native video player for Android/iOS, with both MP4 and M3U8, but the UI was still very laggy (because of data-transfer latencies between Android/iOS and Flutter).
Flutter player. I tried using a Flutter video player for Android/iOS, with both MP4 and M3U8, but the UI shows a black screen with M3U8 and heavy loading on poor internet connections with MP4.
Approaches I need help to try:
Optimize the M3U8 player to minimize the black-screen issue, or...
Create MP4 chunks to optimize for poor-reception areas (this is what I think TikTok, Instagram Reels, and similar applications do, based on what I can see).
Has anyone solved this issue?
How about isolating whether these lags are due to network buffering or to Flutter itself (or even a device hardware limitation such as memory or GPU)?
Perhaps use a few local MP4 files with identical frame rates and encoding parameters (both video and audio) and see whether the UI lag is reproducible on swipe-up scrolling.

C++ DirectShow Video and Audio capture - beginning

I have finally managed to drop VFW after the several problems I encountered during application development.
Thanks to Stack Overflow, I am now aware that VFW is obsolete, and I wish to switch to DShow so that my application works on Vista/W7.
Unfortunately, the work had already been done and the application shipped to the client, but as soon as we realized we had trouble with frame rates on Vista/W7, we decided to rewrite the video class and use DirectShow to build a solid audio/video capture engine for webcams.
This will be tricky, as we have never coded with DShow, and right now we are looking for a few specific examples of how to:
Connect to a selected webcamera
similar to: capDriverConnect
Set the camera resolution to 640x480 and RGB24 format (we need to convert RGB24 to YUV420 for each frame)
similar to: capSetVideoFormat / capCaptureSetSetup
Set audio capturing for this webcamera
similar to: capSetAudioFormat
Register two callbacks:
One for video frame ( we will pass frames to video encoder )
similar to: capSetCallbackOnVideoStream
One for wave buffer ( we will pass wave buffer to audio encoder )
similar to: capSetCallbackOnWaveStream
Be able to show a preview window somewhere on the parent window
similar to: capPreview
Perform Start/Stop operation when needed
Start would mean: connect and start capturing audio/video frames
Disconnect would mean: stop capturing audio/video frames
Perform drawing on the actual frame
similar to:
SetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData);
// draw something with GDI+
GetBitmapBits(CameraInput.GetFrameBitmap(), w*h*3, vdhdr->lpData); // set the frame data back
All of the above was already done with VFW, but as I wrote before, we unfortunately need to switch to DirectShow.
Is there anyone who could help us put together a class that could save us from months of studying DirectShow?
Your best bet for examples will be the ones from Microsoft.
Your questions are still phrased in terms of VFW, so it's hard to answer them as written. For example, in DirectShow you wouldn't register a callback to encode a video frame. Instead, you'd develop an encoder filter that receives data from the capture source.
As an alternative, if you're only targeting Vista and later, there is Microsoft Media Foundation. I have no experience with it, so I don't know how its learning curve compares to DirectShow's.
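That said, for a callback-style starting point closer to the VFW list above, the legacy Sample Grabber filter is one common mapping. The sketch below is only an illustration under those assumptions, not production code: error handling is trimmed, the Sample Grabber approach is deprecated, and qedit.h ships only with older SDKs.

#include <dshow.h>
#include <qedit.h> // ISampleGrabber / ISampleGrabberCB (legacy header)

// Receives every video sample delivered by the graph; hand the buffer to an
// encoder here (the rough analogue of capSetCallbackOnVideoStream).
class FrameCB : public ISampleGrabberCB
{
public:
    STDMETHODIMP BufferCB(double sampleTime, BYTE *buffer, long length) override { return S_OK; }
    STDMETHODIMP SampleCB(double, IMediaSample *) override { return S_OK; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return 2; }  // static lifetime in this sketch
    STDMETHODIMP_(ULONG) Release() override { return 1; }
};

int main()
{
    CoInitialize(nullptr);

    // 1. Pick the first video capture device (the capDriverConnect analogue).
    ICreateDevEnum *devEnum = nullptr;
    CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&devEnum);
    IEnumMoniker *monikers = nullptr;
    devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &monikers, 0);
    IMoniker *moniker = nullptr;
    monikers->Next(1, &moniker, nullptr);
    IBaseFilter *camera = nullptr;
    moniker->BindToObject(nullptr, nullptr, IID_IBaseFilter, (void **)&camera);

    // 2. Build the filter graph.
    IGraphBuilder *graph = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&graph);
    ICaptureGraphBuilder2 *builder = nullptr;
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void **)&builder);
    builder->SetFiltergraph(graph);
    graph->AddFilter(camera, L"Capture");

    // 3. Insert a Sample Grabber, ask for RGB24, and register the callback.
    IBaseFilter *grabberFilter = nullptr;
    CoCreateInstance(CLSID_SampleGrabber, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&grabberFilter);
    graph->AddFilter(grabberFilter, L"Grabber");
    ISampleGrabber *grabber = nullptr;
    grabberFilter->QueryInterface(IID_ISampleGrabber, (void **)&grabber);
    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype = MEDIASUBTYPE_RGB24;
    grabber->SetMediaType(&mt);
    static FrameCB callback;
    grabber->SetCallback(&callback, 1); // 1 = deliver frames via BufferCB

    // 4. Connect camera -> grabber -> default renderer (doubles as a preview window).
    builder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                          camera, grabberFilter, nullptr);

    // 5. Start/Stop control for the whole graph.
    IMediaControl *control = nullptr;
    graph->QueryInterface(IID_IMediaControl, (void **)&control);
    control->Run();
    // ... pump window messages / wait here ...
    control->Stop();

    CoUninitialize();
    return 0;
}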
I'd suggest building a graph in GraphEdit using FFDshow filters.
GraphEdit lets you interactively prototype a DirectShow graph.
I don't think you need to build the filter class on your own. Once you've built the graph and can watch the video in GraphEdit, implementing the same graph in code is a fairly simple task.

How to hook webcam capture?

I'm working on software whose current version includes a custom-made device driver for a webcam; we use this driver with our software, which changes the captured image before displaying it, very much like YouCam.
Basically, when any application that uses the webcam starts, our driver runs processing on each frame before showing it.
The problem is that there are always two webcams installed: the real one and our custom driver.
I noticed that YouCam does what we need, which is to hook into any installed webcam and process each frame before it is shown.
Does anyone know how to do this?
We use VC++.
Thanks
As bkritzer said, OpenCV easily does what you want.
#include <assert.h>
#include <opencv/highgui.h> // legacy OpenCV 1.x C API

void captureLoop()
{
    IplImage *image = 0;        // OpenCV image type
    CvCapture *capture = 0;     // OpenCV capture handle
    bool stillCapturing = true; // clear this elsewhere to stop the loop
    int refreshTime = 40;       // ms between frames

    // Create capture
    capture = cvCaptureFromCAM(0);
    assert(capture && "Can't connect webcam");
    // Capture images
    while (stillCapturing)
    {
        // Grab image
        cvGrabFrame(capture);
        // Retrieve image
        image = cvRetrieveFrame(capture);
        // You can configure the refresh time
        if (image) cvWaitKey(refreshTime);
        // Process your image here
        //...
    }
    cvReleaseCapture(&capture);
}
You can encapsulate these OpenCV calls in a C++ class and dedicate a specific thread to it; this will be your "driver". A sketch of that idea follows.
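(My own illustration of the encapsulation idea, using the modern OpenCV C++ API rather than the legacy C calls above; the class and member names are invented.)

#include <atomic>
#include <thread>
#include <opencv2/opencv.hpp>

class CameraWorker
{
public:
    explicit CameraWorker(int device = 0) : cap_(device) {}
    ~CameraWorker() { stop(); }

    // Spawn the dedicated capture thread.
    void start()
    {
        running_ = true;
        worker_ = std::thread([this] {
            cv::Mat frame;
            while (running_ && cap_.read(frame))
            {
                // Process the frame here.
            }
        });
    }
    void stop()
    {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }

private:
    cv::VideoCapture cap_;
    std::atomic<bool> running_{false};
    std::thread worker_;
};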
I think YouCam uses a DirectShow transform filter. Is that what you need?
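To make that suggestion concrete, a transform filter skeleton looks roughly like the sketch below. It assumes the CTransformFilter base class from the DirectShow base-class library shipped with the Windows SDK samples; the class name, CLSID, and buffer size are placeholders, and COM registration and DLL boilerplate are omitted. Note that this processes frames inside a graph your own process builds, which is the caveat the last answer below raises.

#include <streams.h> // DirectShow base classes (CTransformFilter etc.)
#include <cstring>

// Hypothetical pass-through filter; give it your own CLSID and registration
// code in a real build.
class CFrameHook : public CTransformFilter
{
public:
    CFrameHook(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransformFilter(NAME("Frame Hook"), pUnk, CLSID_NULL) {}

    // Copy the input sample to the output, modifying pixels on the way.
    HRESULT Transform(IMediaSample *pIn, IMediaSample *pOut) override
    {
        BYTE *src = nullptr, *dst = nullptr;
        pIn->GetPointer(&src);
        pOut->GetPointer(&dst);
        const long len = pIn->GetActualDataLength();
        memcpy(dst, src, len);  // pass through...
        // ...then process dst here (raw RGB24 pixels under the media type below)
        return pOut->SetActualDataLength(len);
    }

    // Accept only the format this filter knows how to process.
    HRESULT CheckInputType(const CMediaType *mtIn) override
    {
        return (*mtIn->Type() == MEDIATYPE_Video &&
                *mtIn->Subtype() == MEDIASUBTYPE_RGB24) ? S_OK : VFW_E_TYPE_NOT_ACCEPTED;
    }
    HRESULT CheckTransform(const CMediaType *mtIn, const CMediaType *) override
    {
        return CheckInputType(mtIn);
    }
    HRESULT GetMediaType(int iPosition, CMediaType *pMediaType) override
    {
        // Offer downstream exactly what the input pin is connected with (simplified).
        if (!m_pInput->IsConnected()) return E_UNEXPECTED;
        if (iPosition != 0) return VFW_S_NO_MORE_ITEMS;
        return m_pInput->ConnectionMediaType(pMediaType);
    }
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp) override
    {
        pProp->cBuffers = 1;
        if (pProp->cbBuffer <= 0) pProp->cbBuffer = 1920 * 1080 * 3; // generous placeholder
        if (pProp->cbAlign == 0) pProp->cbAlign = 1;
        ALLOCATOR_PROPERTIES actual;
        return pAlloc->SetProperties(pProp, &actual);
    }
};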
Check out the OpenCV libraries. They include a bunch of tutorial examples that do exactly what you're asking for. It's a bit tough to install, but I've gotten it to work before.
Well, I think there are two key concepts in this question that have been misunderstood:
1) How to hook webcam capture
2) ...any application that uses the webcam...
If I understood correctly, OpenCV is useful for writing your own complete application, "complete" meaning that it opens the camera and processes the images itself. So it wouldn't satisfy point 2), which I understand as referring to some other application (not yours!) opening the camera while your application processes the images.
Point 1) seems to confirm this, because "hook" usually means intercepting some other process that is not part of your own application.
So I doubt whether this question has really been answered. I am also interested in it.