I am writing a single-page app with React for educational purposes. My React Router v4 BrowserRouter handles client-side routing correctly on CodeSandbox but not locally. In this case, the local server is the WebStorm built-in dev server. HashRouter works locally, but BrowserRouter does not.
Functioning properly: https://codesandbox.io/s/j71nwp9469
You are likely serving your app on the built-in web server (localhost:63342), right? The internal web server returns 404 for "absolute" URLs (the ones starting with a slash) because it serves files from localhost:port/project_name and not from localhost:port. That's why you have to make sure to change all URLs from absolute to relative ones.
There is no way to set up the internal web server to use the project root as the server document root. But you can configure it to use URLs like http://<host name>:<port>, where the "host name" is a name specified in the hosts file, like 127.0.0.1 myhostName. See https://youtrack.jetbrains.com/issue/WEB-8988#comment=27-577559.
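For what it's worth, a minimal sketch of keeping the router's URLs relative to the project path, assuming the app is served at http://localhost:63342/my-project (the project name here is an assumption, not taken from your setup):

// React Router v4: basename prefixes every route and <Link> with /my-project,
// so the router matches URLs like /my-project/about instead of /about.
import React from 'react';
import { BrowserRouter, Route, Link } from 'react-router-dom';

const App = () => (
  <BrowserRouter basename="/my-project">
    <div>
      <Link to="/about">About</Link> {/* renders href="/my-project/about" */}
      <Route path="/about" render={() => <h1>About</h1>} />
    </div>
  </BrowserRouter>
);

export default App;

This keeps in-app navigation working under the sub-path, though by itself it does not make full-page refreshes on deep links work (see the accepted answer below).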
The solution was to understand how push-state routing and the History API work. It is necessary to proxy requests through the index page when serving single-page applications that use the HTML5 History API.
The WebStorm dev server is not expected to include this feature, so mentioning WebStorm in this question was a mistake.
There are several libraries of fewer than 20 lines that do this for us, or it can easily be hand-coded, as in the sketch below.
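A minimal hand-coded sketch of such a fallback, assuming an Express static server and a build output in ./build (both of which are assumptions, not part of the original setup):

// server.js -- serve the SPA and fall back to index.html for client-side routes.
const express = require('express');
const path = require('path');

const app = express();

// Serve the static assets (bundles, CSS, images) directly.
app.use(express.static(path.join(__dirname, 'build')));

// Proxy every other GET request through the index page so HTML5 History API
// routes still resolve after a refresh or a deep link.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));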
I have a baffling issue with cookie handling in a Blazor Server app (.NET Core 6) using OpenID Connect (Keycloak). Actually, there is more than one issue, and they may or may not be linked. It's a typical (?) reverse proxy architecture:
A central nginx receives requests for services like Jenkins, JupyterHub, SonarQube, Discourse, etc. These are mapped through aliases to internal IPs where the nginx can reach them. This nginx intercepts URLs like https://hub.domain.eu.
A reverse proxy that serves https://dsc.domain.eu. This forwards requests to a Blazor app running in Kestrel on port 5001. Both Kestrel and nginx run under SSL, which is required to get the WebSockets working.
Some required background: the Blazor app is essentially a "hub" whose various Razor pages host the above-mentioned services in iframe-like fashion. How it works: when the user asks for the root path (https://hub.domain.eu), it opens the root page of the Blazor app (/).
The nav menu contains links to the Razor pages that contain the iframes for the above-mentioned services; for example, a page whose iframe points at a relative path for Jenkins. The relative path is intercepted by the "central" nginx, which loads Jenkins. Everything is under the same Keycloak OpenID server. Note that everything works fine without the Blazor app.
Scenarios that cause the same problem
Assume the user logs in to my app using Keycloak's login page (NOT the REST API) through redirection, then proceeds to one of the linked services and is indeed logged in there as well. The controls in the app change accordingly to indicate that the user is authenticated. If you close the tab and open a new one, the Blazor app acts as if it's not logged in, while the other services (e.g. Jenkins) still show the logged-in user from before. When you press the Login link, you're greeted with a 502 nginx error. If you clear the cookies in the browser (or use private/incognito mode), everything works again. Or if you just log off, e.g. from Jenkins.
Assume that the user is now in a service such as Jenkins, SonarQube, etc. If you press F5 now, you have two problems: you get a 404 error, but only on SOME services such as SonarQube and not on others (this is a side problem for another post). The point is that the Blazor app appears not logged in again after pressing Back/Refresh.
The critical part of Program.cs looks like the following:
This class handles the login / logoff:
Side notes:
SaveTokens = false still causes large-header errors and results in an empty token (shown in the above code with the warning "Token received was null"). I'm still able to obtain user details from HttpContext, though.
No errors show up in the reverse proxy's error.log or in Kestrel (everything is deployed on Linux).
MOST important: if I copy-paste the failed login link (the one that produced the 502 error) to a "clean" browser, it works fine.
There are lots of properties affecting the OpenID Connect setup, and it could also be an nginx issue, but I've run out of ideas over the last five days. The nginx config has been adjusted to accommodate large headers and WebSockets.
Any clues as to where I should at least focus my research to track down the error?
The 502 error points to a problem on nginx's side. The reverse proxy in front of Kestrel was configured properly, but, as it turned out, the front ("central") nginx was not. Once we raised its header buffer sizes to the suggested values, everything worked.
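For reference, a hedged sketch of the kind of change needed on the front nginx; the exact buffer sizes here are assumptions, not the values from the original setup, so tune them to the size of your Keycloak cookies and tokens:

# Inside the http { } or the relevant server { } block of the front nginx.
# Sizes below are illustrative assumptions.
large_client_header_buffers 4 32k;   # allow larger request headers (cookies, tokens)
proxy_buffer_size          16k;      # allow larger response headers from the upstream
proxy_buffers              8 16k;
proxy_busy_buffers_size    32k;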
I have an Angular app using Electron as the desktop wrapper, and a separate Django backend that provides HTTP APIs to the Electron client.
Normally, when I call the login API, the response header has a Set-Cookie field containing the sessionId. I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (in Electron's dev tools).
After some further debugging I noticed a warning sign beside my Set-Cookie in dev tools. It said the cookie was blocked because SameSite was set to Lax. So I found a way to modify the server code to return SameSite=None (together with the Secure property; I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
which did work (and the warning sign is gone) but the cookie is still not visible.
So what's the problem here? Why can't I (and how can I) see that cookie, so as to make sure that the login works in the actual client, not just in Postman?
(BTW, both ends are currently being developed on localhost.)
There's no need to worry. A good way to check whether it works is to actually make a request that requires login (after the API has been tested in Postman) and see whether the desired data are returned. If so, you are good to go (especially when the warning is gone).
If the sessionId cookie is saved, it should automatically be included in the request, unless there's something wrong with the cookie's path; a path of / is fine. For example:
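A hedged sketch of such a follow-up request from the Angular side; the /api/profile endpoint and the localhost:8000 address are assumptions, not details from your project:

// profile.service.ts -- the session cookie is only attached when withCredentials is true.
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class ProfileService {
  constructor(private http: HttpClient) {}

  // Returns the protected data only if the sessionId cookie was stored and sent back.
  getProfile() {
    return this.http.get('http://localhost:8000/api/profile', { withCredentials: true });
  }
}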
Why is the cookie not visible? It's probably due to the separation of the front and back ends. In Electron, the pages are typically local HTML files, since one common configuration step is to modify loadURL (or something similar) in main.js, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
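One way to confirm this, sketched here under the assumption that the Django backend runs at http://localhost:8000, is to ask Electron's cookie store directly from the main process:

// main.js -- list the cookies Electron has stored for the backend origin.
const { app, session } = require('electron');

app.whenReady().then(async () => {
  const cookies = await session.defaultSession.cookies.get({
    url: 'http://localhost:8000', // assumed backend address
  });
  console.log(cookies); // the sessionId cookie should appear here if it was saved
});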
If you test in Chrome instead, you may be able to see it under all cookies. In some cases where the frontend and backend are deployed to the same host, you may see the cookie in dev tools. But I guess there are always reasons why you need Electron to build a desktop app (e.g. bundled Python scripts).
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing in order to prevent potential problems and to get ready for publishing.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting your SSL certificate, but it uses a self-signed certificate and may make your frontend complain.
To fix this, consider adding the following code to the main process:
// main.js (Electron main process)
const { app } = require('electron');

// Only skip certificate validation while developing, i.e. when the app is not packaged.
if (!app.isPackaged) {
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
app.isPackaged provides a good way to distinguish between development (unpackaged) and production (packaged), so the certificate check is only disabled in development in order to make the code work.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's Secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will allow this cookie to be set only over an HTTPS connection.
PS: This is just for your localhost. Hopefully you'll be using an HTTPS connection in other environments.
I followed this tutorial on how to host server-side generated sites on Google Cloud Storage buckets. The issue I am having is that the Nuxt app works fine when routing internally, but when I reload a route, the site is redirected to that route with /index.html appended.
This causes things to break, as it conflicts with Nuxt routing. I set index.html to be my entry point with
gsutil web set -m index.html -e 404.html gs://my-static-assets
but the bucket seems to apply that suffix to every route. I don't have this problem when using Netlify.
According to the tutorial you're following, you're redirected to /route/index.html because of your MainPageSuffix setting (index.html): when you try to access, for example, https://example.com/route, the service looks for the target https://example.com/route/index.html.
To fix this, I suggest that you include an index.html file under each route's subdirectory, as sketched below.
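If the site is generated with Nuxt, a hedged sketch of one way to get those per-route index.html files; this relies on Nuxt 2's generate.subFolders option, which is an assumption about your setup:

// nuxt.config.js -- emit each route as route/index.html so the bucket's
// MainPageSuffix lookup (route/index.html) finds a real object.
export default {
  target: 'static',
  generate: {
    subFolders: true // /about becomes /about/index.html in the generated output
  }
};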
For further info on this, I recommend checking this post, which has a similar issue.
I am relatively new to JavaScript, and I have an uploader tool called Fine Uploader that I was considering using. I got it to work locally (on my development machine, VB.NET), but when I put it on my external server, I noticed that the POST is made over HTTP and directly to the server's domain name (e.g. mydomain.com) instead of mydomain.com/testproject. The site only allows HTTPS traffic.
Is there an easy way to change this (so that it points at https://mydomain.com/testproject/FileUpload.aspx)?
The code used by Fine Uploader shows a parameter called endpoint: '/FileUpload.aspx'.
Do I have to change any IIS settings for this web service?
If you need to enable CORS (cross-domain requests), Fine Uploader supports this. You should read my blog post on how CORS support is implemented in Fine Uploader and how you can support such requests in your server-side code. A rough configuration sketch follows.
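As a hedged sketch (the element id and the exact path are assumptions), pointing the uploader at the absolute HTTPS endpoint and telling it to expect cross-origin responses looks roughly like this:

// Fine Uploader setup with an absolute HTTPS endpoint and CORS enabled.
var uploader = new qq.FineUploader({
  element: document.getElementById('uploader'),
  request: {
    // Absolute URL instead of '/FileUpload.aspx', so the POST goes to the
    // application path over HTTPS rather than to the bare domain.
    endpoint: 'https://mydomain.com/testproject/FileUpload.aspx'
  },
  cors: {
    expected: true,       // the endpoint is treated as cross-origin
    sendCredentials: true // include cookies if your handler relies on them
  }
});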
We have installed an ASP.NET web site on a client's server. This site has a web service with a couple of web methods that are called by a Flash object in order to display a news feed. If you browse to their site (e.g. www.domain.com), everything works fine except the Flash.
The issue is that when we browse to the .asmx, the header shows that the Host is a subdomain internal to their network (internal.domain.com). Obviously this doesn't resolve to any public IP when browsing from outside their network. This causes the Flash to fail, since the Flash object is embedded in a page and therefore runs client-side.
I checked the computer name on the server in question, and it doesn't even match "internal.domain.com"; it is something completely different. Where is it getting this information from? It is not coming from IIS, since we have no host headers set up, and the IP for the site is set to (All Unassigned).
We either need to force the web service to run against a specific host, or we need to change something on the server so that it resolves to a valid public-facing host name. Any and all help is greatly appreciated!
The solution is to add a host header for www.domain.com
More details here
While you probably did this already, it's always a good first step:
Do a global Find in the source code of both the Flash object and the web service for the string in question.
It sounds like someone may have configured or hard-coded the internal.domain.com string into the Flash object's request. (Host: is an HTTP request header, not a response header, IIRC.)
Does the Flash object get the web service URL from the C# code? If so, it might be using the default web service URL that was chosen when adding a Web Reference to the project in VS. It might therefore be pointing to a URL local to the developer's machine/server that is not recognized on the live server.