I have Google Maps as my background map and, on top of it, weather radar TileOverlays. The animation of these tiles is not working on any of the Samsung devices I've tested (S3, S4, S5) or on the Sony Xperia...but it works on all other devices I've tested (Nexus, HTC, Motorola, and many more). Any ideas offhand before I gather the code logic from multiple classes?
It appears that the getTileUrl method I override is no longer being called since whatever updates have recently been pushed to phones. Has anyone heard anything of the sort? This is the override of the abstract method documented here: http://developer.android.com/reference/com/google/android/gms/maps/model/UrlTileProvider.html#getTileUrl(int, int, int)
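For reference, the provider looks roughly like this. This is only a minimal sketch; the radar tile URL template is a placeholder rather than my actual server:

import com.google.android.gms.maps.model.UrlTileProvider;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.Locale;

// Minimal sketch of a radar tile provider; the URL template below is a placeholder.
public class RadarTileProvider extends UrlTileProvider {

    private static final String TILE_URL = "https://radar.example.com/%d/%d/%d.png";

    public RadarTileProvider() {
        super(256, 256); // tile size in pixels
    }

    @Override
    public URL getTileUrl(int x, int y, int zoom) {
        try {
            // This is the method that apparently stops being invoked on the affected devices.
            return new URL(String.format(Locale.US, TILE_URL, zoom, x, y));
        } catch (MalformedURLException e) {
            return null; // no tile for this coordinate
        }
    }
}

The overlay is then attached with map.addTileOverlay(new TileOverlayOptions().tileProvider(new RadarTileProvider())).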
This is a quote I received directly from Google:
"The workaround is to uninstall Google Play Services updates by the user, and I'm not aware of a workaround on the app developer side.
This bug was introduced with the last version of GMSCore released to the devices. On the bright side, developers are already working for a fix and we're expecting it to be released today or tomorrow."
I have an HTML5-based player in my project.
I want to allow my users to cast it to their TVs.
I find myself totally lost.
What is the difference between casting to Chromecast and Android TV?
Do I need a different implementation/application for each?
Or, if I develop for Chromecast, will it work for smart TVs as well?
If I don't want to implement a custom receiver, can I only implement a sender application?
Other than Apple, are there smart TVs that are not Android-based?
How is casting implemented with them?
If I have a hybrid application that runs on ios and on android, what sender application do I need? Do I need a different one for ios and android? Or for hybrid I can also build a hybrid sender?
If there is a simple 'for dummies' tutorial, I would love a link.
Just to clarify on terminology before diving in, a Cast Sender is the app that is controlling the video (e.g., your app on a mobile device) and a Cast Receiver is what's actually playing the video typically on a larger device such as a TV.
What is the difference between casting to Chromecast and Android TV? Do I need a different implementation/application for each? Or, if I develop for Chromecast, will it work for smart TVs as well?
For normal usage, there's no difference in implementation; it's just a case of where the Cast Receiver is running. You can optionally choose to have a native Android TV application as the receiver, which is called Cast Connect, but that is a relatively new feature. Your standard Cast Receiver can run on a variety of hardware and you don't care about the hardware specifics, just the capabilities (e.g., for determining FHD vs 4K).
If I don't want to implement a custom receiver, can I only implement a sender application?
You have to have a Cast Receiver, but there are two options. You can either implement a Styled Web Receiver (sounds like that's what makes sense in your case and isn't much work) or you can implement a custom receiver. Both are covered on the Web Receiver Overview page.
Other than Apple, are there smart TVs that are not Android-based? How is casting implemented with them?
There are a variety of smart TV platforms aside from Android TV and tvOS, including Tizen, WebOS, and others. Your Cast Receiver will work on any of them that support Cast and you don't need to have any custom logic to support them individually.
If I have a hybrid application that runs on ios and on android, what sender application do I need? Do I need a different one for ios and android? Or for hybrid I can also build a hybrid sender?
You need to implement the Android Sender and iOS Sender apps separately.
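For the Android side, a minimal sender setup with the Cast Application Framework looks roughly like the sketch below; the receiver application ID is a placeholder you get when registering your (styled or custom) receiver, and the provider class also has to be referenced from the manifest meta-data:

import android.content.Context;

import com.google.android.gms.cast.framework.CastOptions;
import com.google.android.gms.cast.framework.OptionsProvider;
import com.google.android.gms.cast.framework.SessionProvider;

import java.util.List;

// Minimal Cast sender configuration; "YOUR_RECEIVER_APP_ID" is a placeholder.
public class CastOptionsProvider implements OptionsProvider {

    @Override
    public CastOptions getCastOptions(Context context) {
        return new CastOptions.Builder()
                .setReceiverApplicationId("YOUR_RECEIVER_APP_ID")
                .build();
    }

    @Override
    public List<SessionProvider> getAdditionalSessionProviders(Context context) {
        return null; // no extra session providers needed for plain Cast
    }
}

In your Activity you then obtain CastContext.getSharedInstance(context) and wire up a Cast button via CastButtonFactory; the iOS sender mirrors this setup with the Google Cast iOS SDK (GCKCastContext).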
Regarding Samsung TVs, there's an extension library called the Smart View SDK.
With the Samsung Smart View SDK, you can develop mobile apps that can stream multimedia content from mobile devices to Samsung Smart TVs.
You can refer to the documentation here:
https://developer.samsung.com/smarttv/design/smart-view-sdk.html
https://developer.samsung.com/smarttv/develop/extension-libraries/smart-view-sdk/download.html
And there is an official demo here:
https://github.com/SamsungDForum/SmartViewSDKCastVideo
Link to the bug report on 'Feedback Hub'
An audio endpoint device, from here on referred to as 'endpoint', is a physical or virtual audio output or input device.
With the Windows 10 April 2018 Update (version 1803), the long-overdue 'App volume and device preferences' settings were introduced. These settings allow more control over audio stream management, as it is now possible to set different endpoints for different applications, regardless of whether a particular application comes with its own endpoint selection.
However, there is an issue where the audio of a program whose endpoint is non-default is streamed through the default endpoint (or not at all) after the program has been closed and launched again, although the endpoint is still displayed correctly in the settings:
As far as I know the issue can be recreated on a Windows 10 machine (version 1803 or higher) with any virtual or physical endpoint and an affected program. I used 'VLC Media Player' in this example (disregarding the fact that it comes with an endpoint selection) as it is well known and widely accessible, which should make it easier to recreate the issue.
What I'm searching for...
... is a programmatic solution for switching between endpoints, ideally something that can be run as a script to set the correct endpoint when the application is launched.
For my purposes it would be enough to have to adjust the device instance path manually, as the device would always be the same, but I wouldn't complain about a solution that retrieves the device instance path from the registry either.
Defined endpoints, and the device instance path of the device they are using, can be retrieved from the subkeys of the key HKEY_USERS\# YOUR SID #\Software\Microsoft\Multimedia\Audio\DefaultEndpoint. I don't know how Windows generates the names of the subkeys or where else they can be found. If I had to take a wild guess, I'd say they are Application IDs (feel free to correct me if I'm wrong).
The device instance path itself can be found in the Device Manager (under 'Audio inputs and outputs' double click the desired device, navigate to the tab 'Details' and select 'Device instance path' from the 'Property' drop-down menu).
Additionally, the entry about Audio Endpoint Devices and Stream Management in the Microsoft Docs might be helpful, but that is way above my head.
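For completeness, here is a rough sketch (in Java, purely as an illustration; the same could be done in PowerShell or Batch) that only dumps those subkeys by shelling out to reg.exe. It does not switch anything, and the SID has to be passed in manually:

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Usage: java DumpDefaultEndpoints <your SID>
// Lists the per-app subkeys under ...\Multimedia\Audio\DefaultEndpoint via "reg query".
public class DumpDefaultEndpoints {
    public static void main(String[] args) throws Exception {
        String sid = args[0]; // e.g. the SID shown by "whoami /user"
        String key = "HKU\\" + sid + "\\Software\\Microsoft\\Multimedia\\Audio\\DefaultEndpoint";

        Process p = new ProcessBuilder("reg", "query", key, "/s")
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // each subkey maps an application to a device instance path
            }
        }
        p.waitFor();
    }
}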
A possible but impractical workaround...
... would be to manually set another endpoint for the application and then switch back to the desired endpoint at every launch of said application (as shown above).
But not only does this take at least 10 seconds at each and every launch, you might also forget to do it, as the audio might just get streamed through the default endpoint *¹.
Alternatively, no audio is streamed at all *², or in some cases it actually works *³.
*¹ e.g.: VLC Media Player, Tom Clancy's Rainbow Six Siege (although the audio will be streamed correctly during the splash screens)
*² e.g.: Call of Duty 4: Modern Warfare, Call of Duty: Modern Warfare 2, Call of Duty: Modern Warfare 3
*³ e.g.: Windows Media Player, Microsoft Edge, Firefox
Observations
VLC Media Player comes with an endpoint selection, and so does TeamSpeak 3; unlike VLC, however, TeamSpeak skips the Windows settings completely.
Call of Duty not streaming any audio is most likely connected to the engine, as I didn't encounter any other application behaving similarly.
Windows Media Player, Microsoft Edge, and Firefox are the only programs (that I have tested so far) which work fine. They have no endpoint selection (that I know of) and will use the correct endpoint after being closed and launched again. It should be noted, however, that Firefox and Microsoft Edge show multiple instances in the "App volume and device preferences" when the endpoint is adjusted.
Disclaimer
I already tried two third-party tools: 'Audio Router', which didn't work at all, and 'CheVolume', which doesn't solve the issue and constantly crashes on top of that.
This question is based on one I asked over at Super User (here), where I didn't get an answer I was able to work with due to my lack of knowledge regarding actual programming (I'm only somewhat familiar with Batch and PowerShell). I'm well aware that neither Stack Overflow nor Super User is a script-writing service; however, the issue has not been fixed with the Windows 10 October 2018 Update (1809), and I see this as a problem that affects more people than just me, so an answer would be helpful to others after me. Feel free to write a comment or propose an edit if you see this differently.
I'm also not sure whether the tags 'audio-streaming' and 'endpoint' should be used in this context; please propose an edit if they shouldn't, or if you can think of better ones.
Edit - 05/11/18
Using the third-party software 'EarTrumpet', I was able to overcome the issue with the 'Call of Duty' games (no audio at all after restarting); however, 'VLC Media Player' would not restart after I assigned a non-default endpoint with 'EarTrumpet' until I closed 'EarTrumpet' again, and the issue with 'Tom Clancy's Rainbow Six Siege' remains the same.
Edit - 18/01/19
Added a link to the bug report I created on the 'Feedback Hub' two months ago.
Edit - 20/01/19
After doing some testing again, it should be noted that keeping 'EarTrumpet' running in the background will preserve a non-default endpoint for 'VLC Media Player' across restarts; however, 'VLC Media Player' will only (reliably) restart when the non-default endpoint was set in the 'App volume and device preferences'.
I do not have a solution in any programming language for handling such events.
But I can recommend the EarTrumpet app to handle this change more quickly: https://www.theverge.com/2018/6/13/17457778/eartrumpet-windows-10-audio-app
(Windows Store: https://www.microsoft.com/en-us/p/eartrumpet/9nblggh516xp )
I will update the answer if I find an easy way to script/program a change of output for each app.
Is it possible to control the Sony QX10/QX100 using Sony's Camera Remote API SDK from a C++ Windows program?
Thank you for your patience and time...
Yes, that should be possible. See https://developer.sony.com/develop/cameras/ for more details.
I answered your question on this other, related question; maybe you'd want to check it out?
Windows compatibility with the Sony Camera Remote API
The API works with any device, on any OS and in any programming language; it's just HTTP calls, and camera discovery needs some basic socket listening. Though to make it easy to connect to the camera, the device you're connecting from should support Wi-Fi Direct: http://en.wikipedia.org/wiki/Wi-Fi_Direct
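To give an idea of what such a call looks like, here is a rough sketch of a single request in Java; the endpoint URL is something you would normally take from the camera's device-description XML during SSDP discovery, and the set of available methods depends on the camera model:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Usage: java SonyCameraCall http://<camera address>/sony/camera
public class SonyCameraCall {
    public static void main(String[] args) throws Exception {
        String endpoint = args[0]; // the "camera" service URL from the device description
        // JSON-RPC style request body used by the Camera Remote API.
        String body = "{\"method\":\"getAvailableApiList\",\"params\":[],\"id\":1,\"version\":\"1.0\"}";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Print the JSON response (a "result" array listing the supported API methods).
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}

Once you know which methods your camera exposes, you swap getAvailableApiList for the call you actually need; doing the same from C++ only requires an HTTP client and, for discovery, a UDP socket for the SSDP M-SEARCH.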
The problem is that these APIs are incredibly simplistic. There is no way to transfer/delete images off the memory card, you can't take pictures without a memory card, and you can't add a custom streaming service (only Ustream is supported). The device doesn't have an 'always on' mode, so you need to physically walk up and turn it on / reset it (for cameras where this makes sense, like the AS100V). It's like Sony has one guy in the basement working on this.
Is there any sample project that shows how to use GameKit without GKPeerPicker? And is there any sample that uses Bonjour (but without any internet connection and no wireless router)?
Some information (for both projects):
only Bluetooth (Bonjour for the other project)
more than 2 devices (if possible)
server-client model (how to make it work)
send/receive data
the server can decide whether it is "visible" to other "potential" clients
show the "discovered" devices in a table view -> if the user taps a cell (the name of the device, e.g. "Tom's iPod Touch"), it starts pairing, but the other user must accept the connection (UIAlertView); if they accept, the devices pair up
show all peers connected to the server in a UITableView
kick out some peers (only the server can kick others) (this should be easy to implement: just send a special packet to the client with a string that says "KICK YOURSELF" and it will then disconnect itself)
invite other clients (in search)
(don't really need this but would be awesome):
let clients/the server move objects (physics objects in Box2D (cocos2d)) and then every client should show exactly the same simulation on screen.
After using Google for some hours, I think there are no sample projects that show the "features" above. Maybe someone could make one? Please don't just give me theory; I've read so much, but a sample project (or more, plus another for the Bonjour version) with code commentary would be great!
Edit: I will probably add some bounty on this ;). Currently I can't ;)
Thank you very much for reading :)
cocos2dbeginner
I'm not going to write this for you, but I can help with some information to get you started. There are many examples that show some of your features, particularly Apple's WiTap sample.
One thing to note: you can't go from Bluetooth to Bonjour. Bluetooth is the transport; Bonjour is how you communicate over it. So you'd have to do Bonjour broadcasting on one side and connecting on the other. Bonjour over Bluetooth is taken care of in Apple's API; it should "just work". See here for some more info.
This page from Apple not only has the GameKit concepts you are looking for, it also tells you the method names you will need to get your tasks done, even if you don't want to go through the supplied UI that Apple provides with GameKit.
Here, here, here, and here are links to more Bonjour networking samples. As long as you're not doing complex tasks with sockets, I personally like this one; it makes things very simple.
Hope that helps some.
I highly recommend Ray Wenderlich's page.
There is a tutorial for Game Center networking that matches some of your requests.
http://www.raywenderlich.com/3276/how-to-make-a-simple-multiplayer-game-with-game-center-tutorial-part-12
Judging by your alias: there are many more useful, high-quality tutorials there. Go to http://www.raywenderlich.com/tutorials for a list.
Apple themselves have demo projects that show Wi-Fi connections using GKSession and Bluetooth connections using GKPeerPickerController. If you want a peer picker with an option for both, I think you need to use the peer picker controller to give the user a choice. Use this code at the point where you want the picker to appear, after you have instantiated the picker.
picker.connectionTypesMask = GKPeerPickerConnectionTypeOnline | GKPeerPickerConnectionTypeNearby;
Then if they choose wifi use the code from the GKRocket project in the iOS Sample Code Library. If they choose Bluetooth then use GKTank.
The GKRocket code (using GKSession and tables) is much harder to follow, but GKSession automatically uses Bluetooth if there is no Wi-Fi coverage. Given that you will need most of the GKSession code in your project anyway to handle Wi-Fi, I think it is easier to forget about GKPeerPickerController unless you only want Bluetooth.
Eventually Apple will surely add the necessary methods and properties to GKPeerPickerController to handle Wi-Fi, but for the moment it is GKSession you need.
Hope it helps.
I have a project to scan QR codes or barcodes with the camera on Windows Mobile (phone: X01T).
I'm programming in C++ and using DirectShow.
I tried to change the focus with the IAMCameraControl interface, but it returns an error like "...request is not supported".
Is there any other way?
Thanks
Most (if not all) Windows Mobile phones I've used so far used custom camera drivers, which means OEMs decide which functionalities to implement/support. IAMCameraControl is most likely not one of them.
However, you might want to look for OEM-specific SDKs. For instance, Samsung provides custom APIs that let you change parameters such as camera focus or ISO. Maybe such APIs exist for your device.