I saw Fennex attempt to access the Android camera API from within Cocos2d-x. But in the project listed, I am not sure how I would access the camera and the photo gallery. Is there a way to do this in Cocos2d-x just as you do with Cocos2d?
Thank you!
This post says there is no Cocos2d-x class to do that, but it provides these code segments:
The gist of it is, on Android:
- call Java code to start an Intent (a JNI sketch for triggering this from the C++ side is shown below)
Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
instance.startActivityForResult(cameraIntent, 31); //31 is an ID to recognize that intent yourself
- get the Bitmap returned by the Intent
Bitmap original = (Bitmap) intent.getExtras().get("data");
- optionally, scale it for your own needs (especially on lower-end devices if you also get images from the library)
- save it in the file format you want (I use PNG)
FileOutputStream stream = new FileOutputStream(path);
/* Write the bitmap to the file; the 80% quality hint only applies to JPEG and is ignored for PNG. */
bitmap.compress(CompressFormat.PNG, 80, stream);
bitmap.recycle(); // ensure the bitmap memory is freed
stream.close();
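On the Cocos2d-x (C++) side, the usual way to trigger that Java code is through JNI, for example via cocos2d-x's JniHelper. A minimal sketch, assuming a static Java helper method that starts the camera Intent; the class path and method name below are placeholders:

#include "platform/android/jni/JniHelper.h" // cocos2d-x JNI helper

void openNativeCamera()
{
    cocos2d::JniMethodInfo t;
    // "org/cocos2dx/cpp/AppActivity" and "openCamera" are placeholder names for
    // your activity class and the static Java method that starts the camera Intent.
    if (cocos2d::JniHelper::getStaticMethodInfo(t, "org/cocos2dx/cpp/AppActivity", "openCamera", "()V"))
    {
        t.env->CallStaticVoidMethod(t.classID, t.methodID);
        t.env->DeleteLocalRef(t.classID);
    }
}

The result then has to come back the other way, e.g. by calling a native function from onActivityResult once the image file has been written.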
If you want the same for iOS, look at this post.
Related
I'm using Qt, and I can encapsulate the video stream from my Logitech webcam in a QCamera, subclass QAbstractVideoSurface, and from there feed it to a video widget (specifically a QCameraViewfinder) without problem. As far as I know, the video frames never enter system memory, and that's what I want. However, I also want to manipulate this video (add some overlays). To do this, for each frame I need to get the handle (QAbstractVideoBuffer::handle()) and use it with e.g. the OpenGL API to add my overlays.
bool MyVideoSurface::present(const QVideoFrame& frame)
{
    QVariant h = frame.handle();
    // ... manipulate h using OpenGL
    // ... send the frame on to a video widget
    return true; // report the frame as handled
}
The problem is that when I do this, h is invalid, no matter how I specify the QVideoSurfaceFormat. For example, earlier at initialization I have:
QVideoSurfaceFormat qsvf(surfaceSize, pixelFormat, QAbstractVideoBuffer::GLTextureHandle);
mVideoSurface->start(qsvf); // MyVideoSurface instance
(Relevant docs here and here.) It doesn't matter what handle type I input, none of them are valid (not surprisingly in some cases).
Is there any way for me to access my webcam video frames without mapping? I'm happy with solutions that break any specific assumptions I have about which Qt classes are the relevant ones, but there need to be overlays and they need to be done on the GPU. Is the ability to access the frames hardware dependent, i.e. do I need to choose a device in advance that will support this?
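For reference, the surface subclass described above has roughly this shape; this is a simplified sketch, and the advertised pixel formats and return values are illustrative assumptions rather than my exact code:

#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>

class MyVideoSurface : public QAbstractVideoSurface
{
public:
    // Advertise support only for frames backed by GL textures.
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        if (type == QAbstractVideoBuffer::GLTextureHandle)
            return QList<QVideoFrame::PixelFormat>() << QVideoFrame::Format_RGB32 << QVideoFrame::Format_ARGB32;
        return QList<QVideoFrame::PixelFormat>();
    }

    bool present(const QVideoFrame& frame) override
    {
        // handle() should yield a GL texture id when the GLTextureHandle type is in use.
        QVariant h = frame.handle();
        // ... use h with OpenGL, then pass the frame on
        return h.isValid();
    }
};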
I am using IMFMediaEngine to playback video streams (Smooth Streaming, HLS) and possibly with PlayReady later.
It works wonderfully using TransferVideoFrame to draw the video onto a texture. But I understand that using DirectComposition is a requirement for PlayReady DRM with 1080p video, so I am trying to make this work.
Another advantage of doing it this way is that the video frame rendering is independent of the app, so possible stuttering or lag in the app UI won't affect the video playback.
I am able to make it work, but unfortunately the IDCompositionVisual I am using always ends up being restricted to 640x480, so the video has to be downscaled to fit it. If I then apply a transform to scale it up, I get an ugly stretched picture.
I am registering the Visual this way:
pMediaAttributes->SetUnknown(MF_MEDIA_ENGINE_PLAYBACK_VISUAL, m_pDcompVideoVisual);
As documented here:
https://msdn.microsoft.com/en-us/library/windows/desktop/hh162850(v=vs.85).aspx
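For context, a typical DirectComposition setup around that call would look roughly like this (a sketch for illustration; the DXGI device, window handle and intermediate variable names are assumptions):

// Create a DirectComposition device from the app's DXGI device (pDxgiDevice and hwnd are assumed to exist).
IDCompositionDevice *pDcompDevice = nullptr;
DCompositionCreateDevice(pDxgiDevice, IID_PPV_ARGS(&pDcompDevice));

// Bind a composition target to the window and create the visual that will host the video.
IDCompositionTarget *pDcompTarget = nullptr;
pDcompDevice->CreateTargetForHwnd(hwnd, TRUE, &pDcompTarget);
pDcompDevice->CreateVisual(&m_pDcompVideoVisual);
pDcompTarget->SetRoot(m_pDcompVideoVisual);
pDcompDevice->Commit();

// Hand the visual to the media engine attributes before the engine is created.
pMediaAttributes->SetUnknown(MF_MEDIA_ENGINE_PLAYBACK_VISUAL, m_pDcompVideoVisual);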
Creating a surface beforehand and calling SetContent on the Visual doesn't change anything; it's as if the video player overrides it with its own 640x480 surface. I would really like to stick with the simple player, find the real solution to this problem, and be able to specify the size of the visual's surface when I receive the MF_MEDIA_ENGINE_EVENT_FORMATCHANGE event.
Since specifying a DirectComposition visual is an option in IMFMediaEngine, there must be a way to make this work.
Based on the docs I see for IMFMediaEngine, you should be able to handle DRM-protected content using IMFMediaEngineProtectedContent::TransferVideoFrame:
“For protected content, call this method instead of the IMFMediaEngine::TransferVideoFrame method.”
Something like this can go in the VideoPlayer::CaptureFrame method of the sample you provided:
// Transfer the frame to the texture.
// For protected content, query IMFMediaEngineProtectedContent and use its TransferVideoFrame.
IMFMediaEngineProtectedContent *temp = nullptr;
HRESULT hr = m_pMediaEngine->QueryInterface(IID_PPV_ARGS(&temp));
assert(SUCCEEDED(hr) && "IMFMediaEngineProtectedContent not available");
DWORD flags = 0;
HRESULT ret = temp->TransferVideoFrame(m_pRenderTarget, &videoRect, &targetRect, &borderColor, &flags);
temp->Release();
//HRESULT ret = m_pMediaEngine->TransferVideoFrame(m_pRenderTarget, &videoRect, &targetRect, &borderColor);
assert(ret == S_OK && "Failed to transfer video frame");
Give this a shot with 1080p protected content.
Hi, I have an application made with Borland C++Builder; I am using RAD Studio for it.
In the application there is a TForm with a TAnimate (a control for playing AVI clips) on it. I wanted to know if it is somehow possible to stretch the TAnimate object.
If I change the size of the object:
video->Width = newwidth;
video->Height = newheight;
The video doesn't get stretched; instead, a white border gets added around the video image.
Is there some way to scale the video image?
If someone tells me that it is impossible, that would be OK!
Maybe it is possible to convert the TAnimate content into a scaled TImage.
The AutoSize property of TAnimate doesn't work for this.
TAnimate is just a thin wrapper around a Win32 Animation control, which has no option for stretching/scaling video. Even MSDN says:
Note The AVI file, or resource, must not have a sound channel. The capabilities of the animation control are very limited and are subject to change. If you need a control to provide multimedia playback and recording capabilities for your application, you can use the MCIWnd control. For more information, see MCIWnd Window Class.
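If switching to MCIWnd is acceptable, a minimal sketch of hosting one on the form could look like this; the file name and the newwidth/newheight values are placeholders, and I haven't verified how well the MCI AVI playback scales when the window is resized:

#include <vfw.h> // MCIWnd API; link with Vfw32.lib

// Create an MCIWnd child window on the form, without the default play bar,
// and keep it from auto-sizing itself to the movie.
HWND hwndMCI = MCIWndCreate(Handle, HInstance,
                            WS_CHILD | WS_VISIBLE | MCIWNDF_NOPLAYBAR | MCIWNDF_NOAUTOSIZEWINDOW,
                            TEXT("video.avi"));
MoveWindow(hwndMCI, 0, 0, newwidth, newheight, TRUE); // size it to the desired dimensions
MCIWndPlay(hwndMCI);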
So I am trying to figure out how to get a video feed (or screenshot feed, if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but with all the dependencies involved, getting an ARToolkit example running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. However, I recall that ARToolkit for Unity has an example that plays video.
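On Windows, that per-frame texture update could be fed from a plain GDI screen grab. A minimal sketch, assuming the texture already has storage allocated at the desktop size and that the GL headers in use define GL_BGRA (defined below as a fallback):

#include <windows.h>
#include <GL/gl.h>
#include <vector>

#ifndef GL_BGRA
#define GL_BGRA 0x80E1 // not in the GL 1.1 header that ships with Windows
#endif

// Grab the primary desktop with GDI and upload it into an existing OpenGL texture.
// texId must already have width x height storage allocated with glTexImage2D.
void captureDesktopToTexture(GLuint texId, int width, int height, std::vector<unsigned char>& pixels)
{
    HDC hdcScreen = GetDC(NULL);                      // DC for the primary desktop
    HDC hdcMem    = CreateCompatibleDC(hdcScreen);
    HBITMAP hBmp  = CreateCompatibleBitmap(hdcScreen, width, height);
    HGDIOBJ old   = SelectObject(hdcMem, hBmp);

    BitBlt(hdcMem, 0, 0, width, height, hdcScreen, 0, 0, SRCCOPY);
    SelectObject(hdcMem, old); // deselect the bitmap before GetDIBits, as the API requires

    // Read the pixels back as 32-bit BGRA, top-down.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;            // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    pixels.resize(static_cast<size_t>(width) * height * 4);
    GetDIBits(hdcMem, hBmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

    // Push the pixels into the texture the AR scene samples from.
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());

    DeleteObject(hBmp);
    DeleteDC(hdcMem);
    ReleaseDC(NULL, hdcScreen);
}

Calling this once per rendered frame and mapping the texture onto a quad in the AR scene gives the virtual-screen effect; note that a BitBlt from the screen DC only captures the desktop that is currently visible.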
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me like what you want to do is to make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
I compiled the DirectShow sample player (from the Windows SDK's "Samples\multimedia\directshow\players\dshowplayer" folder).
Everything works well, but it renders directly to the screen and the audio goes directly to DirectSound. I need to be able to grab the data so I can write out images to BMPs and write out the audio to .wav.
Am I using the wrong sample as a starting point? If not, what is the easiest way to modify the sample so I can get access to the video and audio data?
Thanks!
You can insert a SampleGrabber filter before the renderer and use the ISampleGrabberCB interface to access the data. You can still render the video to the screen and output the audio; if you don't want that, use a NullRenderer instead. See also this example on CodeProject.
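A rough sketch of wiring the SampleGrabber into the graph; error handling is omitted, pGraph and pCallback are assumed to exist already, and ISampleGrabber comes from the deprecated qedit.h header shipped with older SDKs:

#include <dshow.h>
#include <qedit.h> // ISampleGrabber / ISampleGrabberCB (deprecated, from older Windows SDKs)

// Create the SampleGrabber filter and add it to the existing filter graph.
IBaseFilter* pGrabberFilter = NULL;
CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGrabberFilter));
pGraph->AddFilter(pGrabberFilter, L"Sample Grabber");

// Ask for uncompressed RGB24 video so the callback receives plain pixel data.
ISampleGrabber* pGrabber = NULL;
pGrabberFilter->QueryInterface(IID_PPV_ARGS(&pGrabber));
AM_MEDIA_TYPE mt = {};
mt.majortype = MEDIATYPE_Video;
mt.subtype   = MEDIASUBTYPE_RGB24;
pGrabber->SetMediaType(&mt);

// pCallback implements ISampleGrabberCB; the second argument (1) selects the
// BufferCB(SampleTime, pBuffer, BufferLen) callback rather than SampleCB.
pGrabber->SetBufferSamples(FALSE);
pGrabber->SetCallback(pCallback, 1);

// Then connect decoder -> SampleGrabber -> video renderer (or NullRenderer),
// for example with IGraphBuilder::Connect on the respective output/input pins.

The same pattern works for the audio branch with MEDIATYPE_Audio and a second grabber in front of the audio renderer, which is where you would collect samples for the .wav output.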