My Meteor app doesn't work properly when deployed to AWS Lightsail.
The problem: when I update a document in a collection, the changes are not reflected on the client, and, stranger still, the updated field disappears from the client view.
Everything works fine in development on my local machine, though. I also had the same app deployed on my office server, where it ran without any issues.
I created a simple script to reproduce the problem; see the video: https://streamable.com/mnu26j
So adding or removing a document is reflected on the client side without issue, but when I try to edit a field (name), it is saved to the database, yet the client doesn't reflect the change until I refresh the page.
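For reference, the reproduction boils down to the usual publish/subscribe/update pattern, something like this (a minimal sketch; the collection name Items and the method name items.rename are just placeholders for what's in the video):

```js
// collections/items.js (shared by client and server)
import { Mongo } from 'meteor/mongo';
export const Items = new Mongo.Collection('items');

// server/publications.js
import { Meteor } from 'meteor/meteor';
import { Items } from '../collections/items';

Meteor.publish('items.all', function () {
  return Items.find(); // reactive cursor, pushed to subscribers over DDP
});

// methods.js
Meteor.methods({
  'items.rename'(itemId, name) {
    // The update is saved to Mongo, but on Lightsail the change only
    // shows up on the client after a page refresh.
    Items.update(itemId, { $set: { name } });
  },
});

// client/main.js
import { Meteor } from 'meteor/meteor';
Meteor.subscribe('items.all');
```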
To me it feels like there might be a misconfigured WebSocket connection that doesn't play well with the AWS setup, but I might be totally wrong about that. In the full version of the app, the WebSocket kept reconnecting and closing when I tried to authenticate the user (in a cycle of a few hundred times per minute), and the app was stuck on the login screen.
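If it helps to narrow that down, I can watch the connection behaviour on the client with Meteor's standard APIs, roughly like this (a small diagnostic sketch):

```js
// client/connection-debug.js
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';

Tracker.autorun(() => {
  const { status, connected, retryCount } = Meteor.status();
  // On a healthy deployment this settles on "connected"; if the proxy in
  // front of the app drops the websocket, it keeps cycling through
  // "connecting" / "waiting" instead.
  console.log('DDP status:', status, 'connected:', connected, 'retries:', retryCount);
});
```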
Has anyone run into a similar problem, or does anyone have a clue where I could look for the cause?
Let me know if you want me to provide any more details.
Since I couldn't find the answer after hours of searching, my first attempt was to load a WebView in React Native and point it to a custom CCP page hosted on my own server using Amazon Connect Streams. This is the method that seems best documented, but it is intended for desktop browsers. It works great in Firefox on desktop and mobile (Android). When I try it in Chrome or the React Native WebView, it fails.
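For context, the custom CCP page does little more than embed the hosted CCP with Streams, roughly like this (a sketch; the instance URL and container id are placeholders):

```js
// Loaded after amazon-connect-streams (amazon-connect.js) on my custom page.
const container = document.getElementById('ccp-container'); // placeholder id

connect.core.initCCP(container, {
  ccpUrl: 'https://my-instance.awsapps.com/connect/ccp-v2/', // placeholder instance
  loginPopup: true, // Streams tries to pop the login window itself
  softphone: { allowFramedSoftphone: true },
});

// Subscribe to new contacts so the agent can be alerted when a chat comes in.
connect.contact((contact) => {
  console.log('Incoming contact of type:', contact.getType());
});
```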
Developer tools in desktop Chrome gives this error: Refused to display '<URL>' in a frame because it set 'X-Frame-Options' to 'sameorigin'. Obviously, I don't have control over Amazon's X-Frame-Options, but I don't think that's really the main problem. The other errors shown in the console are:
amazon-connect.js:204 [2020-07-14T22:12:32.258Z] [WARN]: ACK_TIMEOUT occurred, attempting to pop the login page if not already open.
amazon-connect.js:205 [2020-07-14T22:12:32.262Z] [ERROR]: ACK_TIMEOUT occurred but we are unable to open the login popup.
I don't know what to do about this one.
I've also looked at the AWS.Connect API documentation, but it seems to cover only the customer side of a connection.
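To illustrate what I mean, the closest thing in the SDK is StartChatContact, which creates the customer's end of a chat but never logs an agent in; a minimal sketch with the JavaScript SDK (the IDs are placeholders):

```js
// Starts the CUSTOMER side of a chat via the Amazon Connect service API.
const AWS = require('aws-sdk');
const connectApi = new AWS.Connect({ region: 'us-east-1' });

connectApi.startChatContact(
  {
    InstanceId: '11111111-1111-1111-1111-111111111111',    // placeholder
    ContactFlowId: '22222222-2222-2222-2222-222222222222', // placeholder
    ParticipantDetails: { DisplayName: 'Customer' },
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data); // ContactId, ParticipantId, ParticipantToken
  }
);
```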
Has this been done before? If so, could someone give the broad strokes of it? I want to create an app which can connect to the Amazon CCP and maintain the connection so the agent can get an alert as soon as a chat attempt comes in.
Alternative solutions would be appreciated as well.
Update: There is a library from Amazon called amazon-connect-chatjs which looks promising, but I'm not sure if it will allow for the agent side of the chat session. I'm reading about it now.
I am using Django for a REST API at my company, and a few people are using the first small part of my app on a Samsung tablet. They connect over WiFi to an Angular front end served by Apache, which makes API requests to my Django development server.
But every now and again the server just freezes. The front end still works and you can navigate it, but no API calls go through. Then, whenever I press CTRL+C in the development server console, suddenly all the requests go through. Depending on how long someone has been struggling, there may be 20 API requests that all go through at once.
At these moments, even when I change something in the Django code in VS Code, nothing happens server side, but when I press CTRL+C, suddenly even ALL the pending server restarts go through as well. So I can see that all the requests are standing in line, just waiting for the server to wake up, and then they are all processed. It also looks to me as if this mostly happens with the tablet (and not my desktop), where we are using Chrome.
I read that the development server is now multithreaded, so that should not be the problem, and I also do not have antivirus software that could block the requests.
I do not really know what else to say; it has been two weeks of frustration now and continual searching for an answer with no luck.
I suspect that the issue is that you are using the dev server and one request is taking too long. If you already have Apache installed, you can use mod_wsgi to connect it to Django, as described here:
Django With Apache
I am currently developing an instant messaging feature for my apps (ideally a cross-platform mobile app/web app), and I am out of ideas for fixing my issue.
So far, I have been able to make everything work locally, using a Node.js server with socket.io, Django, and Redis, following what most online tutorials suggest.
The step I am at now is putting all of that in the cloud on AWS. My Django server is up and running, I created a separate Node.js server, and I am using ElastiCache to handle the Redis part. I launch the different parts, and no errors show up.
However, whenever I try to use my messaging feature on the web, I keep getting a 500 error:
handshake error
I then used the console to check the request headers and observed that the cookies are not there, unlike when I am on localhost. I know they are needed to authorize the handshake, so I guess that's where my error is coming from.
Furthermore, I have checked that the cookies do exist; they are just not set in the request header.
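For reference, the handshake authorization on the Node side follows the usual socket.io 0.9 pattern from those tutorials, roughly (a simplified sketch):

```js
// "io" is the socket.io 0.9 server (require('socket.io').listen(server)).
// The handshake is authorized from the session cookie; when the browser
// doesn't send cookies on the cross-domain request, headers.cookie is
// undefined and the handshake fails, which surfaces as the 500 above.
io.set('authorization', function (handshakeData, callback) {
  var cookieHeader = handshakeData.headers.cookie;
  if (!cookieHeader) {
    return callback('handshake error', false);
  }
  // ...parse the Django sessionid out of cookieHeader and look it up in Redis...
  callback(null, true);
});
```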
So my question is: how can I make sure that Django or the socket client (I'm not sure which is responsible here) puts the cookies in the header?
One of my ideas was that maybe I am supposed to put everything on the same server, on different ports, instead of two separate servers? Documentation on that specific architecture question is surprisingly scarce compared to the number of tutorials describing how to make it work locally.
I hope I described the problem accurately enough! :)
Important note: I am using socket.io v0.9.1-1, the only version compatible with a Titanium mobile app.
Thank you for any help!
All right, so I've made some progress.
The cookie problem came from the fact that I was making cross-domain requests. Adding a few lines enabled CORS, which didn't solve the cookie issue but allowed me to communicate between servers (basically, I set the response headers using Express). I then passed the necessary data in the query string; even if that's not the most secure way to do it, I'm just building an MVP, and it's enough for now.
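Concretely, it boils down to something like this (a sketch of the two pieces; the domains and the token mechanism are simplified):

```js
// Node/Express side, socket.io 0.9.x. The front-end origin is a placeholder.
var express = require('express');
var http = require('http');
var socketio = require('socket.io'); // v0.9.1

var app = express();
var server = http.createServer(app);
var io = socketio.listen(server);

// "Setting the headers of the response": enable CORS for requests from the web app.
app.use(function (req, res, next) {
  res.header('Access-Control-Allow-Origin', 'http://my-frontend.example.com');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  next();
});

// Authorize the handshake from data passed in the query instead of a cookie.
io.set('authorization', function (handshakeData, callback) {
  var token = handshakeData.query.token;
  // ...look the token/session up in Redis here...
  callback(null, !!token);
});

server.listen(8080);

// Browser side (socket.io 0.9 syntax): pass the data in the connection query.
// var socket = io.connect('http://chat.example.com:8080', { query: 'token=' + myToken });
```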
I haven't been able to make the chat work from my Titanium mobile app, but since I can use a webview to handle it, I will be fine.
Hopefully that will help someone. If anyone needs me to post some code snippets, I will gladly do so upon request!
Cheers
I am always curious about how updates to large-scale, live web applications are done. The fact that the application is live complicates everything: you should not take your service down, and at the same time the activity/changes (in the database, etc.) made on your site during the update should later be carried over to the new version.
The first and most natural technique that comes to mind is redirecting all requests to a replicated server, so that you can update the original server without shutting down your service.
I just wonder whether there are other, smarter techniques for handling updates to a live web service. Please share your experience and opinions!
I am facing the same challenge myself.
What I did was recreate the site on another server [let's name it the Test server] (with a different domain, of course), import the scripts/database from the live server, and adjust them for the new domain.
Now I am experimenting on the Test server, and after I make sure that everything is working OK, I push the changes to the live server.
Unfortunately, I don't know if this is the correct way to do it. You have to be careful, but it works.
I have a setup with one authoring site and two remote publishing sites.
If I publish the /Home/ content tree from authoring, that is reflected in all remote targets.
If I publish any other content tree, say /Quotations/, that is not reflected in any of the remote targets. It is, however, reflected on the authoring machine's "Internet" site, so the changes are being published locally.
The log file on the authoring site says that the publish of the Quotations content tree worked correctly and that N items were published (N varies depending on how much I change and/or whether I do a full or incremental publish, but it is about what I expect it to be).
I feel I've missed something in the configuration but am not sure where to look.
Many thanks!
rjsjr
A couple of ideas:
Are the templates and other items needed to properly store the content present on the remote targets? If "Quotations" uses different templates that aren't published to the remote target, you may be publishing empty content items.
Are the remote targets configured in Sitecore's config files as the proper databases to push the content to?
Time for another approach: can we isolate the problem to one of the following?
DB server. Take the database for the remote target and run it behind another web server to ensure that the DB is serving up the data correctly.
Web server. Take the web server that hosts the remote target and point it at another database server to check that nothing is wrong with the web server itself, like a misconfiguration in IIS or something like that.
Connectivity between the two. This is what is left if the DB works with another web server and the web server works with another DB server, since each part can then be eliminated as the sole source of the problem.
Or do we already know it is that last one, which is the ugliest to debug?
Are Home and Quotations siblings of each other? If not, then there may be something above Quotations that is the source of the problem.
That I don't know. I'd be tempted to ask this on the Sitecore forums on their site; if you are certified in Sitecore, you should be able to access them.