Ant Media Server invalidates token while fetching VOD chunks - unauthorized

I have an Ant Media Server VOD embedded on a different domain. While playing the stream, the first chunk request works fine, but as soon as you click anywhere on the seek bar and the player tries to fetch the required chunk, the one-time token is invalidated.
There is a conflict here that I'm not able to understand.
If I play the stream directly in the browser, it works fine every time, which means the token is valid and chunks can be requested from anywhere in the stream.
If I set the publish token to false in the settings, everything works fine on the embedded page, which means it is not a CORS issue either.
I'm wondering what could cause this kind of conflict.

Are you trying to play from a non-HTTPS page? Ant Media Server's one-time token system checks session IDs, and if you play from a non-HTTPS page, each request can get a different session ID. You need to use HTTPS in your system. Could you please try with an HTTPS page?
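A minimal sketch of checking this by hand, assuming the token is passed as a query parameter for HLS playback; the host, app name and token below are placeholders, not taken from the question. The point is that the embedding page itself must be served over HTTPS so every chunk request reuses the same session:

async function probePlaylist(): Promise<void> {
  // Hypothetical URL; Ant Media serves HLS playlists under the app's streams path.
  const playlistUrl =
    "https://media.example.com:5443/WebRTCAppEE/streams/vod1.m3u8?token=ONE_TIME_TOKEN";
  // credentials: "include" keeps the session cookie on this cross-origin request;
  // on a plain-HTTP page each request can land in a new session, invalidating the token.
  const res = await fetch(playlistUrl, { credentials: "include" });
  console.log(res.status, (await res.text()).split("\n").slice(0, 5).join("\n"));
}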

Related

nginx API cross-origin calls not working only from some browsers

TL;DR: The React app's API calls return status code 200 but with an empty response body; this happens only when accessing the web app from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx on a subdomain (i.e. backend.mydomain.com) for things like validating tokens and fetching data.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users complained that they could not log in from their work network. They can visit the site, so the JavaScript application is clearly served to them, but when they log in, all of the requests come back with status 200, except the response has an empty body (and the login requires a few pieces of information to be sent back with the login response to work).
For example, when I log in from where I am, I get a response with status 200 and a JSON object with a few parameters in the body.
But when one of the users showed me the same request from their browser, they got status 200 back with an empty response body. They are using the same browser versions as I am, and they tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users to send me some screenshots, I found the problem. In my browser, which works with the site, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. In their browser, however, the same header showed up as no-referrer-when-downgrade.
I had not explicitly set the referrer policy, so each browser was using its own default value, which differs between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
The users who had trouble before can now access the site's API resources after clearing their browser cache.
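If changing the nginx config is not an option, a per-request alternative (an assumption on my part, not what the answer above did) is to pin the policy on the client via fetch's referrerPolicy option, so it no longer depends on each browser's default:

async function login(user: string, password: string): Promise<Response> {
  // Hypothetical endpoint on the backend subdomain from the question.
  return fetch("https://backend.mydomain.com/api/login", {
    method: "POST",
    referrerPolicy: "strict-origin-when-cross-origin", // override the browser default
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user, password }),
  });
}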

Cookies not available in subdomain

I am using Spring Boot, and while I am able to set the cookie on the server and it is visible in the response in the browser, it is not submitted on further requests. The reason seems to be a subdomain issue. I will detail the exact issue and content below.
Requesting Page: http://stgapi.py.com
it makes a API call -> http://secure.stgapi.py.com
The cookie is set with:
Domain -> .py.com (also tried .stgapi.py.com)
Path -> /
The cookie is visible in the browser; however, on the subsequent call to
http://secure.stgapi.py.com the cookies are not submitted,
so a re-login is requested and the flow enters an infinite loop of login and failure.
Any help is much appreciated; the entire web says this is how it should work, yet I am not able to make it work.
After much brainstorming, I figured out that we need to set
("Access-Control-Allow-Credentials", "true")
and the client (user agent) should also send withCredentials: true.
Further, ("Access-Control-Allow-Origin") must not be set to "*"; only a single explicit origin is supported.
Reason: the request above is a CORS request, not the same-origin request it appears to be at first glance.
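A minimal sketch of the client side, assuming the API is called with fetch (credentials: "include" is the fetch equivalent of XHR's withCredentials: true):

async function fetchProfile(): Promise<unknown> {
  // Hypothetical endpoint on the secure subdomain from the question.
  const res = await fetch("http://secure.stgapi.py.com/api/me", {
    credentials: "include", // send the session cookie on this cross-origin call
  });
  // For this to work, the server must answer with:
  //   Access-Control-Allow-Credentials: true
  //   Access-Control-Allow-Origin: http://stgapi.py.com   (the exact origin, never "*")
  return res.json();
}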

504 gateway timeout on Django site with nginx + FastCGI

We added the ability for admin users to change the server date & time through the portal. Changing the date & time backward works fine, but changing it forward (by more than fastcgi_read_timeout) returns a '504 Gateway Timeout', even though the server time is successfully changed behind the scenes.
Please advise how to handle this.
Thanks.
I had a very similar issue in another project. It may be best to submit the date & time change (I assume you would be using NTP server IPs to do this) through the portal asynchronously via a JavaScript AJAX request, and then let the server do its thing with the date & time.
Meanwhile, have the client-side JavaScript continuously probe the server with interval AJAX requests (perhaps every 5 seconds) to get back a response message with the server time. That way each subsequent AJAX request initiates a new nginx session: if the first fails or times out, it tries a second time, then a third, and so on (see the sketch below).
This worked on our system. However, I do not know whether your product has login/authentication credentials. If it does, the user may have to log back in once everything is done, because a change in time may also expire their login session. I don't think this is a big deal, though, because in theory they should only need to change the date/time once in a while, if not just once, so it shouldn't have much impact on the user experience.
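A sketch of the fire-and-forget plus polling approach described above; the endpoints and payload shape are assumptions, not the asker's actual API:

async function changeServerTime(newTime: string): Promise<void> {
  // Fire the long-running request without awaiting it; nginx may still
  // answer with a 504, which we deliberately ignore.
  fetch("/api/set-time", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ time: newTime }),
  }).catch(() => { /* expected if the proxy times out */ });

  // Poll every 5 seconds on a fresh connection until the server reports
  // the new time, giving up after a bounded number of attempts.
  for (let attempt = 0; attempt < 24; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    try {
      const res = await fetch("/api/current-time"); // hypothetical endpoint
      const { time } = await res.json();
      if (time === newTime) return; // change confirmed
    } catch {
      // transient failure while the clock jumps; keep polling
    }
  }
  throw new Error("server time change could not be confirmed");
}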

Cookie handling in subsequent requests on the same page

I'm creating my own (multi threaded) ISAPI based website in C++ and I'm trying to implement correct session management.
The problem is that when a new session should be created, the session is created twice, triggered by the subsequent requests the browser makes while loading the generated web page.
Here's how it works:
- The client requests http://localhost and sends either no cookie or a cookie with an old session ID in it.
- The server looks at the session cookie and decides it needs to create a new one because the old session no longer exists: it prepares a header with a cookie containing a new session ID and sends the complete header to the client (I tracked this with the HTTP Live Headers plugin in Firefox and it is correct). It also prepares some data, like the start of the page (not yet body data, as it is still processing data from the database), and sends what it has back to the client.
- The client should have this new session cookie now, sees the stylesheet link and immediately sends the stylesheet request http://localhost/css to my server. But... it still does this with the old session ID for some reason, not with the newly received one!
- The server sees this request (again with a no-longer-existing session ID), generates another new session and sends the new session ID in a cookie along with the stylesheet data.
So the client has now received two session IDs and will keep using the second one from now on, as the first is overwritten; nevertheless, the first page has used the wrong session (or actually, the second request has).
You could say this is not a problem, but once I start using personalized stylesheets, the first page will get the wrong stylesheet, and since the page uses AJAX to refresh its content (when available), the stylesheet may never be reloaded unless the client refreshes.
So, is this a problem that always exists when doing this kind of thing? Will the browser always send an old cookie even though it has already received a new one but is still processing the page? Does PHP, for example, have this problem too?
Note: before the discussions start about "use PHP instead" or something: I am rewriting a website that I had first written in PHP. It became popular, had thousands of (real) visitors every hour, and started killing my server (the website doesn't make the kind of money that would let me throw lots of servers at it). By writing it in C++, requests take 2 ms instead of 200 ms in PHP, and I can optimize everything. By taking my time to develop this ISAPI extension correctly, it is safely multi-threaded and can be multi-process and multi-server. And most of all, I like the challenge.
Added note: it seems the problem only occurs when an old session exists in the cookies, because when I completely clear all cookies from my browser and a new one is created and sent back to the client, the subsequent stylesheet request immediately uses the given session ID. This seems to be some kind of proof that I'm doing something wrong when an old session ID is sent... Should the existing cookie be deleted first? How?
Added note: the cookie is written with an expire date one year ahead.
I have found out what the problem was: I was under the assumption that setting a cookie without specifying a path would make the session work on all paths of that domain.
By using http://foo.bar/home as the main page and http://foo.bar/home/css as the stylesheet, internally translating those URLs to ?s1=home and ?s1=home&css=y, I was actually using two different paths according to the browser, which therefore did not pass the cookie to the css request.
For some reason they actually got merged afterwards; I don't fully understand why.
Isn't this silly? Won't people often have an http://foo.bar/index.php and an http://foo.bar/css/style.css.php, simply because they use subdirectories to keep their structure clean?
If anyone knows of a way to make subpaths work with the same cookies, let me know. As I understand it from the definition of cookies, they are stuck within a specific path (although it seems that if you explicitly set a path, even one other than /, it will also work on subdirectories?).
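A sketch of the fix: always emit an explicit Path=/ on the Set-Cookie header, so /home and /home/css share the same session cookie (the asker's server is C++/ISAPI; the snippet below is only illustrative):

function buildSessionCookie(sessionId: string): string {
  // One year ahead, matching the expire date mentioned in the question.
  const expires = new Date(Date.now() + 365 * 24 * 60 * 60 * 1000);
  // Without an explicit Path, the browser derives a default path from the
  // directory of the request URL, so cookies set during requests to different
  // paths can end up with different scopes. An explicit "Path=/" (or any
  // explicit prefix) also matches all of its subpaths.
  return `sid=${sessionId}; Path=/; Expires=${expires.toUTCString()}`;
}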

Connecting a desktop application with a website

I made an application using Qt/C++ that reads some values every 5-7 seconds and sends them to a website.
My approach is very simple: I just read the values I want to send and then make an HTTP POST to the website, sending the username and password along with it.
The problem is that I cannot find out whether the request was successful. If I send the request and the server receives it, I always get HTTP 200 back. For example, if the password is incorrect, there is no way to know; the server responds with 200 regardless.
Now I think I need some kind of protocol to handle the communication between the application and the website.
The question is: what protocol should I use?
If the action completes before the response header is sent, you have the option of adding a custom status header to it. If your website is built on PHP, you can call header() to add the custom status of the operation.
header('XAppRequest-Status: complete');
If you can modify the server-side script, you could do the following.
On the client side: make the HTTP POST request via AJAX and evaluate the result of the AJAX request.
On the server side: process the request and, if everything goes accordingly, send data back to the AJAX script that called it.
Does that solve your problem?
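A hedged sketch of the client side of that idea, reusing the custom XAppRequest-Status header from the first answer (the URL and payload are assumptions):

async function sendReadings(values: number[]): Promise<boolean> {
  const res = await fetch("https://example.com/ingest.php", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ values }),
  });
  // Judge success from the custom header (or a JSON body) rather than from
  // the bare 200 status, which this server returns unconditionally. On a
  // cross-origin call the server must also expose the header via
  // Access-Control-Expose-Headers: XAppRequest-Status.
  return res.headers.get("XAppRequest-Status") === "complete";
}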