BitMovin - mpd file throwing 404 in player [closed] - amazon-web-services

I'm trying to set up live streaming using AWS S3 and BitMovin. Things are pretty much working; however, my player cannot find an .mpd file. Where does this file come from? The .m3u8 file is being generated and placed in the S3 bucket by BitMovin, but where is the .mpd supposed to get created? Am I supposed to do anything to generate this file, or does BitMovin create it?
I've been working off this tutorial:
https://bitmovin.com/tutorials/dash-hls-live-streaming/
Solution
I had a typo in the stream key.

The MPD (.mpd) gets created as soon as there is a valid RTMP push input coming in to be encoded. The tutorial already provides two examples of creating an RTMP push input stream, with ffmpeg and with Open Broadcaster [1].
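For reference, here is a rough sketch of such an ffmpeg push driven from Python's subprocess; the RTMP endpoint, stream key and input file below are placeholders, not values from the tutorial:

# Sketch: push a local test file to an RTMP ingest point with ffmpeg.
# RTMP_URL and STREAM_KEY are placeholders from your live encoding setup.
import subprocess

RTMP_URL = "rtmp://<your-ingest-endpoint>/live"   # placeholder
STREAM_KEY = "<your-stream-key>"                  # watch out for typos here

subprocess.run([
    "ffmpeg",
    "-re",                 # read the input at its native frame rate
    "-i", "input.mp4",     # any local test source
    "-c:v", "libx264",     # H.264 video
    "-b:v", "3000k",
    "-c:a", "aac",         # AAC audio
    "-b:a", "128k",
    "-f", "flv",           # RTMP expects an FLV container
    f"{RTMP_URL}/{STREAM_KEY}",
], check=True)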
If you are using Open Broadcaster, please make sure that you have configured all of the settings mentioned in the tutorial and that there is an input source available.
Further, please make sure that your S3 bucket is set up with valid CORS settings [2] to enable HTML5 playback.
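For the CORS part, a minimal boto3 sketch of what such a configuration could look like (the bucket name and allowed origin are placeholders; the exact rules you need are described in [2]):

# Sketch: apply a permissive CORS rule to the S3 bucket used for playback.
# Bucket name and origin are placeholders; tighten the origin for production.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-streaming-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["*"],           # or your player's domain
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)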
[1]
https://bitmovin.com/tutorials/dash-hls-live-streaming/#RTMP_Input_Examples
[2]
https://bitmovin.com/tutorials/mpeg-dash-hls-adaptive-streaming-aws-s3-cloudfront/#Setup_CORS_and_crossdomainxml_on_S3
Best,
Gernot

Related

Having trouble understanding how to host a static website on AWS [closed]

Sorry if this isn't the right place for this, but I can't find the answer anywhere. I've never used AWS before and I want to use it to host a static website that I built. I bought a domain name, followed the documentation using their example index.html document, and got everything working. Now I'm trying to upload the files for my actual site, which has a structure like this:
MySite
-assets
--images
-src
--index.html
--style.css
I cannot figure out how to get it to serve my index.html file from the src folder. Please help me.
EDIT: Forgot to add that the error I keep getting is: "The IndexDocument Suffix is not well formed"
Hi, S3 will always read the index document from the root of the bucket. You'll want to have index.html in the root, not in the src folder.
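As a rough sketch of both fixes (the bucket name is a placeholder): the website configuration's index document suffix may only be a plain file name with no slashes, which is what the "The IndexDocument Suffix is not well formed" error is complaining about, so upload the file so it sits at the bucket root and keep the suffix as index.html:

# Sketch: put index.html at the bucket root and point the website config at it.
# "my-static-site" is a placeholder bucket name.
import boto3

s3 = boto3.client("s3")

# Upload the local src/index.html to the key "index.html" at the bucket root.
s3.upload_file("src/index.html", "my-static-site", "index.html",
               ExtraArgs={"ContentType": "text/html"})
s3.upload_file("src/style.css", "my-static-site", "style.css",
               ExtraArgs={"ContentType": "text/css"})

# The Suffix may only be a file name (no slashes), e.g. "index.html".
s3.put_bucket_website(
    Bucket="my-static-site",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)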

Facebook app_scoped_user_id throwing 404 [duplicate]

I have an issue: suddenly, all of the Facebook Graph API image URLs stored in my database return a default placeholder image.
Example URL:
http://graph.facebook.com/{user-id}/picture?type=large
It is a known bug (which could also mean that it will not be possible anymore in the future):
https://developers.facebook.com/bugs/2054375031451090/
You should subscribe to the bug report and wait.
Update: You can make it work by adding an access_token to the API call, but you should only do that server side, of course. An App Access Token should be good enough:
https://graph.facebook.com/<userId>/?fields=picture&type=large&access_token=...
Update 20.04.2018: It seems like picture URLs are working without an Access Token again: <img src="https://graph.facebook.com/[app-scoped-id]/picture" />
Please add the access_token parameter to the URL:
https://graph.facebook.com/id/picture?type=large&access_token=faskfjsld
This will work for sure.
There is an update here:
https://developers.facebook.com/bugs/2054375031451090/
I just tried it, and it works by simply appending your access token to the URL.
So this:
https://graph.facebook.com/<userId>/?fields=picture&type=large
Should become like this:
https://graph.facebook.com/<userId>/?fields=picture&type=large&access_token=...
Hope it helps!
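As a minimal server-side sketch of the pattern described above (the app ID and secret are placeholders, and the "app_id|app_secret" form of an App Access Token must never be exposed client-side):

# Sketch: fetch a profile picture URL server-side with an App Access Token.
# APP_ID and APP_SECRET are placeholders; never ship them to the browser.
import requests

APP_ID = "<app-id>"
APP_SECRET = "<app-secret>"
USER_ID = "<app-scoped-user-id>"

app_access_token = f"{APP_ID}|{APP_SECRET}"

resp = requests.get(
    f"https://graph.facebook.com/{USER_ID}/picture",
    params={"type": "large", "redirect": "false", "access_token": app_access_token},
)
print(resp.json()["data"]["url"])   # redirect=false returns JSON instead of the image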

Any way to install Stata packages offline? [closed]

I'm using Stata on a remote desktop that doesn't have access to the Internet, and need to install a package. I want to download it to my hard-drive and manually install it while on the remote desktop, but I don't know where to download packages online. Any help is appreciated.
If you search Google for ssc package_name, a link to ideas.repec.org will usually come up, and you can download all of the files manually from there.
(Estout example: https://ideas.repec.org/c/boc/bocode/s439301.html).
You will have to put these files in a directory where Stata looks for ado-files; you can find these directories using the command sysdir. I would recommend saving them to the PERSONAL folder.
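As a hedged sketch of that last step (the PERSONAL path below is only an example of what sysdir might report on Windows; use whatever sysdir actually prints on your machine):

# Sketch: copy manually downloaded package files into Stata's PERSONAL directory.
# The destination path is an example; check the real one with Stata's sysdir command.
import shutil
from pathlib import Path

downloaded = Path("downloads/estout")    # folder with the manually downloaded files
personal = Path("C:/ado/personal")       # example PERSONAL dir reported by sysdir

personal.mkdir(parents=True, exist_ok=True)
for f in downloaded.iterdir():
    if f.suffix in {".ado", ".sthlp", ".hlp", ".dlg"}:
        shutil.copy2(f, personal / f.name)
        print("copied", f.name)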
Assuming that the question means that you wish to transfer commands available on the SSC from a machine with the internet to a machine without the internet, you could:
1: Copy the file from SSC using the ssc copy command on the PC connected to the internet, for example:
ssc copy whitetst.ado
2: Copy the resulting .ado file to your remote desktop, into one of the directories where Stata looks for ado-files (run sysdir to list them, as described above).

C++ generated csv vs Open Office export [closed]

I've finished a little application in c++ that parses a table of ~15k records into a .csv file.
The problem I'm having is that a third-party application that's supposed to use this file as source (Magmi) won't recognize the fields from my generated csv. However, if I simply open the same file with Open Office Calc and export it again as a .csv, it works perfectly fine with no other changes whatsoever.
I initially thought this might be a Windows CR/LF issue, so I recompiled the application on Linux and checked with Notepad++ to make sure there is no surplus CR in there, and there isn't. All the line endings are LF.
Can someone please give me a hint as to what I am missing?
Thanks
It turns out it was a permissions issue that was causing the problem. Since my dev environment is set up on a VM, I was copying the output file into the import folder (I never really thought to check whether the permissions were the cause). The ownership remained with the original user the file came from, which is why it worked when exported from Open Office but failed when I tried to use the original one.
Thanks all.
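For anyone hitting the same thing, a small hedged sketch of how you might spot it (the paths are placeholders; this assumes a Unix-like system where you can compare the owner of the generated file and the re-exported one):

# Sketch: compare ownership/permissions of the generated CSV and the working one.
# Paths are placeholders; requires a Unix-like system for the pwd lookup.
import os, pwd, stat

for path in ("import/generated.csv", "import/openoffice_export.csv"):
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    mode = stat.filemode(st.st_mode)
    print(f"{path}: owner={owner} mode={mode}")

# If the owner or mode is wrong, fix it (or copy the file as the importing user):
# os.chmod("import/generated.csv", 0o644)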

Things to do on a production server [closed]

We just launched our product recently. It's been great, I couldn't have done it without StackOverflow. You guys have been great. Thank You.
So yeah, getting to the question -
What are the fundamental things that I should take care of in a production environment?
The kind of answers I'm looking for would be for example - run a cron job to take regular backup of your database etc..
Well, besides a cron job for backing up the database (make sure you keep older backups around in case one gets corrupted), there's the user upload directory.
I suppose you can create an image of the upload path and files at whatever interval suits your site's activity.
Also, I'd always make sure log files have a maximum size limit (or get rotated) so you don't end up with 10 miles of text crammed into one file.
Security tends to be overwhelming, but at a minimum make sure the directories are chmod/chown'd properly. You probably want to prevent unwanted SSH access, so either use iptables to make it hard to get in, or at least change your password from time to time.
I think this topic could go on for ages :-)
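As a hedged example of the database-backup point (this assumes MySQL with mysqldump and gzip available; the database name and backup directory are placeholders), a script like this can be run from cron and keeps the old backups around instead of overwriting them:

# Sketch: nightly dump run from cron; keeps the last 14 backups instead of overwriting.
# Assumes MySQL with mysqldump and gzip on the PATH; names and paths are placeholders.
import subprocess, datetime
from pathlib import Path

backup_dir = Path("/var/backups/myapp")
backup_dir.mkdir(parents=True, exist_ok=True)

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
target = backup_dir / f"myapp-{stamp}.sql.gz"

dump = subprocess.Popen(["mysqldump", "--single-transaction", "myapp_db"],
                        stdout=subprocess.PIPE)
with open(target, "wb") as out:
    subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
dump.wait()

# Prune everything except the newest 14 dumps.
dumps = sorted(backup_dir.glob("myapp-*.sql.gz"))
for old in dumps[:-14]:
    old.unlink()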
If you are using the Apache server, then this article (http://www.devshed.com/c/a/Apache/Server-Limits-for-Apache-Security/) will be very useful to you.
Plus: compress your graphics and use the Django Compressor application to significantly reduce the JS/CSS load.
http://coderpriyu.blogspot.com/2011/12/django-compressing-cssjs-files-with.html
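For the Django Compressor suggestion, a minimal sketch of the settings involved (the app and finder names follow django-compressor's documented setup; treat everything else as placeholders for your own project):

# settings.py sketch for django-compressor (names per its documented setup).
INSTALLED_APPS = [
    # ... your other apps ...
    "compressor",
]

STATICFILES_FINDERS = [
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
    "compressor.finders.CompressorFinder",
]

COMPRESS_ENABLED = True  # combine/minify assets wrapped in {% compress js %} / {% compress css %} template blocks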