How to turn off webcam after ending meeting with Amazon Chime SDK

I've got a video conference website using amazon-chime-sdk-js, and a button that ends the current meeting. The problem is that I can't get the webcam to stop indicating that it is recording (LED light and red browser icon), even after making the calls outlined in the SDK FAQ below:
https://aws.github.io/amazon-chime-sdk-js/modules/faqs.html#after-leaving-a-meeting-the-camera-led-is-still-on-indicating-that-the-camera-has-not-been-released-what-could-be-wrong
const stop = async (meetingId) => {
  try {
    const response = await API.post("chime", "/chime/end", {
      body: { meetingId },
    });
    console.log(response);
    // Select no video device (releases any previously selected device)
    await meetingSession.audioVideo.chooseVideoInputDevice(null);
    // Stop local video tile (stops sharing the video tile in the meeting)
    meetingSession.audioVideo.stopLocalVideoTile();
    meetingSession.audioVideo.stop();
  } catch (e) {
    console.log(e);
  }
};
I've even tried releasing the tracks individually with getUserMedia, to no avail. Any ideas on how to turn off the webcam?

You could reload the window when your component is unmounting, or right after you end the meeting. Reloading the page turns off the camera that was already turned on. For example, using JavaScript:
window.location.reload();
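A rough sketch of the same idea in a React component, assuming a function component with useEffect imported from 'react' and the meetingSession from the question in scope (not a confirmed fix):
useEffect(() => {
  return () => {
    // Run the explicit cleanup calls when the component unmounts...
    meetingSession.audioVideo.stopLocalVideoTile();
    meetingSession.audioVideo.chooseVideoInputDevice(null);
    meetingSession.audioVideo.stop();
    // ...or, as a blunt fallback, force a full page reload:
    // window.location.reload();
  };
}, []);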

Related

Can't catch Bluetooth headset button click event in SwiftUI 2.0

I am trying to simply execute code on a click of a Bluetooth headset button in a SwiftUI 2.0 app, but after trying many different approaches, nothing has worked... Has someone solved this issue?
Based on the Apple docs and an answer I found on Stack Overflow (https://stackoverflow.com/a/58249502/13207818), I tried this simple code:
import SwiftUI
import MediaPlayer

struct ContentView: View {
    init() {
        MPRemoteCommandCenter.shared().pauseCommand.isEnabled = true
        MPRemoteCommandCenter.shared().pauseCommand.addTarget(handler: { (event) in
            print("Pause")
            return MPRemoteCommandHandlerStatus.success
        })
        MPRemoteCommandCenter.shared().playCommand.isEnabled = true
        MPRemoteCommandCenter.shared().playCommand.addTarget(handler: { (event) in
            print("Play")
            return MPRemoteCommandHandlerStatus.success
        })
        MPRemoteCommandCenter.shared().togglePlayPauseCommand.addTarget(handler: { (event: MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus in
            // middle button (toggle/pause) is clicked
            print("event:", event.command)
            return .success
        })
    }

    var body: some View {
        Text("Hello World")
    }
}
Of course, I enabled Background Audio as per the Apple docs:
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
</array>
I even tried to activate my app's audio session:
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: .default, options: [.duckOthers, .allowBluetooth, .allowBluetoothA2DP])
    try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
    print("audioSession is Active")
} catch {
    print("audioSession properties weren't set because of an error.")
    print(error)
}
But everything failed...
Does someone know what I am doing wrong, or has anyone faced this issue with SwiftUI 2.0?
Thanks in advance for your support.
In general you shouldn't do actions in the initializers of views. Since views represent the state of the UI, not the actual UI, they can be torn down and created again whenever SwiftUI thinks it needs to.
I'm not at my PC, but you can probably get a Publisher for the pause button which you can bind to a view with onReceive.
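A minimal sketch of moving the registration out of the initializer, here into .onAppear (where exactly the setup should live is an assumption, not something the question confirms):
import SwiftUI
import MediaPlayer

struct ContentView: View {
    var body: some View {
        Text("Hello World")
            .onAppear {
                // Register once the view is actually on screen, not in init().
                let center = MPRemoteCommandCenter.shared()
                center.togglePlayPauseCommand.isEnabled = true
                _ = center.togglePlayPauseCommand.addTarget { event in
                    print("event:", event.command)   // headset middle button
                    return .success
                }
            }
    }
}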
Finally, I got a solution for my issue.
I don't know how it really works behind the scenes, but the audio focus wasn't on my app. So I just played a silent sound for a second, and then my play/pause button worked properly. I know it's not a proper solution, but it works!
This reminds me of a similar bug on the Galaxy S8...
If I find a better one, I'll keep you posted.
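Roughly, the workaround looked like this (a sketch; the bundled "silence.mp3" file name and the AVAudioPlayer usage are assumptions, not the exact code I used):
import AVFoundation

var silencePlayer: AVAudioPlayer?

func grabAudioFocus() {
    // "silence.mp3" is a hypothetical short, silent audio file bundled with the app.
    guard let url = Bundle.main.url(forResource: "silence", withExtension: "mp3") else { return }
    do {
        silencePlayer = try AVAudioPlayer(contentsOf: url)
        silencePlayer?.volume = 0
        silencePlayer?.play()   // after this, the remote (headset) commands start arriving
    } catch {
        print("could not play silent sound:", error)
    }
}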

How to use two CameraCaptureUI at the same time (UWP / C++)

I am working on a Universal Windows Platform (UWP) application in which I am using C++ as the main language. I want to read from two cameras at the same time: one is the Kinect RGB camera and the other is the Kinect depth camera. So far I've managed to read from just one using this piece of code:
void SDKTemplate::Scenario4_ReproVideo::Grabar_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    CameraCaptureUI^ dialog = ref new CameraCaptureUI();
    dialog->VideoSettings->Format = CameraCaptureUIVideoFormat::Mp4;
    Windows::Foundation::Collections::IPropertySet^ appSettings = ApplicationData::Current->LocalSettings->Values;

    concurrency::task<StorageFile^>(dialog->CaptureFileAsync(CameraCaptureUIMode::Video)).then([this](StorageFile^ file) {
        if (file != nullptr) {
            concurrency::task<Streams::IRandomAccessStream^>(file->OpenAsync(FileAccessMode::Read)).then([this](Streams::IRandomAccessStream^ stream) {
                CapturedVideo->SetSource(stream, "video/mp4");
                logger->Text = "recording";
            });
            Windows::Foundation::Collections::IPropertySet^ appSettings = ApplicationData::Current->LocalSettings->Values;
            appSettings->Insert("CapturedVideo", PropertyValue::CreateString(file->Path));
        }
        else {
            logger->Text = "Something went wrong or was cancelled";
        }
    });
}
By doing this I can reliably record from one of the cameras. My problem is that I need to record from both cameras at the same time, as I need the depth and RGB streams to process the video.
I am new to concurrency; is there a way (the simpler the better) to achieve two recordings simultaneously?
In a UWP app, we can capture photos and video using the MediaCapture class, which provides functionality for capturing photos, audio, and video from a capture device. See the topic Basic photo, video, and audio capture with MediaCapture.
We can initialize multiple MediaCapture instances and then read frames using the MediaFrameReader class. See the topics Discover and select camera capabilities with camera profiles and Process media frames with MediaFrameReader, and also look into the official CameraFrames sample.
Besides, there is a similar thread about UWP multiple camera capture; you can also refer to it:
Handle multiple camera capture UWP
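For illustration, a rough, untested C++/CX sketch of that approach: one MediaCapture per frame source group, so the RGB and depth streams can be read at the same time. The group/source selection below is simplified, and in real code you would keep the MediaCapture and MediaFrameReader objects as members so they stay alive; the CameraFrames sample shows the complete flow.
using namespace Windows::Media::Capture;
using namespace Windows::Media::Capture::Frames;

concurrency::create_task(MediaFrameSourceGroup::FindAllAsync())
    .then([this](Windows::Foundation::Collections::IVectorView<MediaFrameSourceGroup^>^ groups)
{
    for (MediaFrameSourceGroup^ group : groups)
    {
        auto settings = ref new MediaCaptureInitializationSettings();
        settings->SourceGroup = group;   // e.g. the Kinect color group or the depth group
        settings->StreamingCaptureMode = StreamingCaptureMode::Video;
        settings->MemoryPreference = MediaCaptureMemoryPreference::Cpu;

        auto capture = ref new MediaCapture();
        concurrency::create_task(capture->InitializeAsync(settings))
            .then([this, capture]()
        {
            // Take the first frame source of this group and read its frames.
            auto source = capture->FrameSources->First()->Current->Value;
            concurrency::create_task(capture->CreateFrameReaderAsync(source))
                .then([this](MediaFrameReader^ reader)
            {
                reader->FrameArrived += ref new Windows::Foundation::TypedEventHandler<
                    MediaFrameReader^, MediaFrameArrivedEventArgs^>(
                        [](MediaFrameReader^ r, MediaFrameArrivedEventArgs^)
                {
                    MediaFrameReference^ frame = r->TryAcquireLatestFrame();
                    // Process the RGB or depth frame here (frame can be nullptr).
                });
                reader->StartAsync();
            });
        });
    }
});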

Custom CAF receiver: how to handle idle state for displaying splash screen

I'm building a custom Chromecast receiver based on the CAF SDK. I'm trying to keep the receiver application in the IDLE state so it shows the splash screen during the 5 minutes after the media has ended...
I tried to use:
var video = document.createElement("video");
video.classList.add('castMediaElement');
video.style.setProperty('--splash-image', 'url("img/logo-mySplash.svg")');
document.body.appendChild(video);

var context = cast.framework.CastReceiverContext.getInstance();
var playerManager = context.getPlayerManager();
context.setInactivityTimeout(300);

playerManager.addEventListener(cast.framework.events.EventType.ALL,
  function (event) {
    switch (event.type) {
      case 'CLIP_ENDED':
        context.setApplicationState('IDLE');
        break;
    }
  });
When the media ends, the receiver dispatches:
{type: "CLIP_ENDED", currentMediaTime: 2673.986261, endedReason: "END_OF_STREAM"}
{type: "MEDIA_FINISHED", currentMediaTime: 2673.986261, endedReason: "END_OF_STREAM"}
and sends an error to the debug console:
[ 32.846s] [cast.receiver.MediaManager] Unexpected command, player is in IDLE state so the media session ID is not valid yet
I can't find any documentation about this issue. Thanks anyway for your answers.
The context.setApplicationState simply updates the application's statusText. See: https://developers.google.com/cast/docs/reference/caf_receiver/cast.framework.CastReceiverContext#setApplicationState.
What you probably want to do is add a listener to the playerDataBinder for the cast.framework.ui.PlayerDataEventType.STATE_CHANGED event. This way you can make your idle screen visible whenever the state property changes.
Also, in order to get the full advantage of using CAF, you probably want to use the cast-media-player element. Additional information on the benefits of using this element instead of a custom video element can be found here: https://developers.google.com/cast/docs/caf_receiver_features
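For example, a minimal sketch of that listener (the #splash overlay element is an assumption, not part of the CAF SDK):
const context = cast.framework.CastReceiverContext.getInstance();
const playerData = {};
const playerDataBinder = new cast.framework.ui.PlayerDataBinder(playerData);

playerDataBinder.addEventListener(
  cast.framework.ui.PlayerDataEventType.STATE_CHANGED,
  (e) => {
    // Show the splash overlay while the player is idle, hide it otherwise.
    const splash = document.getElementById('splash');
    splash.style.display =
      e.value === cast.framework.ui.State.IDLE ? 'block' : 'none';
  });

context.start();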

Google Play Game Services' invite dialog doesn't show any friends

I'm trying to get GPG C++'s invitation system to work correctly for an iOS game.
I tried to call RealTimeMultiplayerManager::ShowPlayerSelectUI() to open a dialog to select friends to invite. However, the dialog doesn't show any friends, only the "Auto-Pick" option. I have several Google test accounts in the same circle, and they are already listed as testers in the Google Play Console. The game is still in development.
Does anyone know what's wrong?
Thanks,
Here is the code:
m_gameServices->RealTimeMultiplayer().ShowPlayerSelectUI(1, 1, true,
    [this](gpg::RealTimeMultiplayerManager::PlayerSelectUIResponse const &response) {
        HQRemote::Log("inviting friends %d", response.status);
        if (gpg::IsError(response.status))
        {
            handleError((BaseStatus::StatusCode)response.status);
        }
        else {
            auto config = gpg::RealTimeRoomConfig::Builder()
                .PopulateFromPlayerSelectUIResponse(response)
                .Create();
            createRoom(config);
        }
    });
Update 21/3/2016:
Friends finally appeared in the dialog's UI after 3 days of development. Why is there a delay like this?
Check your RealTimeMultiplayerManager::ShowPlayerSelectUI call to see if you misconfigured something. Here is a sample snippet:
void Engine::InviteFriend() {
    service_->RealTimeMultiplayer().ShowPlayerSelectUI(
        MIN_PLAYERS, MAX_PLAYERS, true,
        [this](gpg::RealTimeMultiplayerManager::PlayerSelectUIResponse const &response) {
            LOGI("inviting friends %d", response.status);
            // Your code to handle the user's selection goes here.
        });
}
You can also look into this Adding Real-time Multiplayer Support to Your Game documentation to check your configuration.

QMediaRecorder does not record anything

I have a program that records video from a web camera. It shows the camera view in the form. When the start button is clicked it should start recording video, and recording should stop after pressing the stop button. The program compiles fine, but no video is recorded. Can anyone say what is wrong with it?
Here is my code.
{
    camera = new QCamera(this);
    viewFinder = new QCameraViewfinder(this);
    camera->setViewfinder(viewFinder);
    recorder = new QMediaRecorder(camera, this);

    QBoxLayout *layout = new QVBoxLayout;
    layout->addWidget(viewFinder);
    ui->widget->setLayout(layout);

    QVideoEncoderSettings settings = recorder->videoSettings();
    settings.setResolution(640, 480);
    settings.setQuality(QMultimedia::VeryHighQuality);
    settings.setFrameRate(30.0);
    //settings.setCodec("video/mp4");
    recorder->setVideoSettings(settings);
    recorder->setContainerFormat("mp4");

    camera->setCaptureMode(QCamera::CaptureVideo);
    camera->start();
}

void usbrecorder::on_btn_Record_clicked()
{
    usbrecorder::startRecording();
}

void usbrecorder::on_btn_Stop_clicked()
{
    usbrecorder::stopRecording();
}

void usbrecorder::startRecording()
{
    recorder->setOutputLocation(QUrl::fromLocalFile("C:\\Users\\Stranger\\Downloads\\Video\\vidoe_001.mp4"));
    recorder->record();
}

void usbrecorder::stopRecording()
{
    recorder->stop();
}
This is due to limitations on Windows.
As mentioned in the Qt documentation here: https://doc.qt.io/qt-5/qtmultimedia-windows.html#limitations
Video recording is currently not supported. Additionally, the DirectShow plugin does not support any low-level video functionality such as monitoring video frames being played or recorded using QVideoProbe or related classes.
You need to specify an output location:
QMediaRecorder::setOutputLocation(const QUrl& location)
e.g.
setOutputLocation(QUrl("file:///home/user/vid.mp4"));
Try to print the state, status, and error message:
qDebug() << recorder->state();
qDebug() << recorder->status();
qDebug() << recorder->error();
and see what it prints. With those messages you can have a clear picture of your problem. Maybe QMediaRecorder cannot access your camera.
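You can also connect to the recorder's signals so problems are logged as they happen. A rough sketch, assuming Qt 5 and the recorder member from the question, placed somewhere after the recorder is created (e.g. in the constructor):
#include <QDebug>

connect(recorder, &QMediaRecorder::statusChanged, this,
        [](QMediaRecorder::Status status) {
    qDebug() << "recorder status:" << status;
});
// error() is overloaded (getter and signal), so the signal needs QOverload.
connect(recorder, QOverload<QMediaRecorder::Error>::of(&QMediaRecorder::error), this,
        [this](QMediaRecorder::Error) {
    qDebug() << "recorder error:" << recorder->errorString();
});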