Clone requests to a different application in PCF - cloud-foundry

I have a requirement to replicate the traffic received by a production system to a pilot project (the same code deployed as a different app with minimal changes). I am not sure how to do it. Can you please help me out?
App1 -> Production Application
App2 -> Pilot Project
I want the same traffic received by the production system to be replicated to the pilot project as well. (This is not about splitting traffic between the apps using the same route; we are talking about duplicating the traffic.) Is there a way to do that?
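Cloud Foundry's router will not duplicate traffic by itself, so one way to approximate this is a small mirroring proxy in front of App1, for example bound to App1's route as a user-provided route service, that forwards every request to production and fires a copy at the pilot app while discarding the pilot's response. Below is a minimal Flask sketch of the idea; the pilot URL is hypothetical, and X-CF-Forwarded-Url is the header the Gorouter uses to tell a route service where the request was originally going.

# mirror_service.py: a sketch of a mirroring route service for App1
import threading

import requests
from flask import Flask, Response, request

app = Flask(__name__)
PILOT_URL = "https://app2-pilot.example.com"  # hypothetical App2 route
SKIP = ("host", "content-length")  # recomputed when the request is resent

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def mirror(path):
    method = request.method
    body = request.get_data()
    headers = {k: v for k, v in request.headers if k.lower() not in SKIP}

    def send_copy():
        try:
            requests.request(method, PILOT_URL + "/" + path,
                             headers=headers, data=body, timeout=5)
        except requests.RequestException:
            pass  # a failing pilot must never affect production traffic

    # Fire-and-forget duplicate to the pilot; its response is discarded.
    threading.Thread(target=send_copy, daemon=True).start()

    # Forward the real request to its original destination (App1).
    upstream = request.headers["X-CF-Forwarded-Url"]
    resp = requests.request(method, upstream, headers=headers, data=body)
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))

Note that the pilot copy is sent on a background thread with a short timeout so that a slow or broken pilot cannot add latency to production responses.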

Related

504 Gateway Time-out from Google Cloud Platform, but only sometimes

I'm hosting a single-node Couchbase cluster in GCP and a Flask backend on an OpenShift cluster that serves an Angular frontend. The problem is that when my Angular app makes a POST call to Flask, connecting to the VM (Couchbase) sometimes takes too long, so Flask has to return a "504 Gateway Time-out". But this happens only sometimes; other times it works very well, at proper speed. I am not able to troubleshoot it. The total data size is less than 100 MB, and everything is 100% memory-resident in Couchbase, so I guess this is not a problem with Couchbase, just connection latency to GCP.
My guess is that when your Flask backend connects to your VM for the first time, it takes longer than usual because it needs to establish the connection, authenticate, and possibly do other things depending on your use case.
This is a common problem when hosting your app on App Engine or something similar, and the solution there is to use "warm-up requests". These spin up the whole connection (and, in App Engine's case, the instance) and make a test connection so that when the real request comes, everything is already set up.
So I suggest you check how warm-up requests work and configure something similar between your Flask app and the VM: basically, a route in Flask whose only purpose is to establish a test connection with a test payload. This way the next connection will be up to speed, with no 504 errors.
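For illustration, here is a minimal sketch of such a warm-up route in Flask; the connection helper is a hypothetical placeholder for whatever your app already uses to reach Couchbase:

from flask import Flask

app = Flask(__name__)

_bucket = None

def connect_to_couchbase():
    # Placeholder for your existing Couchbase connection code; the SDK
    # calls vary by version, so they are omitted here.
    raise NotImplementedError

def get_bucket():
    # Open the connection once and reuse it across requests, so only the
    # very first caller pays the connect/authenticate cost.
    global _bucket
    if _bucket is None:
        _bucket = connect_to_couchbase()
    return _bucket

@app.route("/_warmup")
def warmup():
    # Any cheap read works; the point is to establish and authenticate the
    # connection now rather than during the first real request.
    get_bucket().get("warmup-doc")  # hypothetical document key
    return "ok", 200

Pointing your monitoring or a scheduler at /_warmup also re-establishes the connection after idle periods, not just at startup.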
Try clearing the cache of the load balancer in the GCP console.
I already faced the same kind of issue and resolved it using this technique.

504 gateway timeout for any request to Nginx despite lots of free resources

We have been maintaining a project internally which has both web and mobile application platforms. The backend of the project is developed in Django 1.9 (Python 3.4) and deployed on AWS.
The server stack consists of Nginx, Gunicorn, Django and PostgreSQL. We use Redis based cache server to serve resource intensive heavy queries. Our AWS resources include:
t1.medium EC2 (2 cores, 4 GB RAM)
PostgreSQL RDS with one additional read-replica.
Right now Gunicorn is set to create 5 workers (following the 2*n+1 rule, where n is the number of cores). Load-wise, there are about 20-30 mobile users making requests every minute and 5-10 users checking the web panel every hour. So I would say: not very much load.
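For reference, this worker setup corresponds to a gunicorn.conf.py roughly like the sketch below; the bind address and timeout shown are illustrative, not our exact values:

# gunicorn.conf.py: an illustrative sketch of the setup described above;
# 2*n+1 with n = 2 cores gives 5 workers.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1  # 2 cores -> 5 workers
bind = "127.0.0.1:8000"  # the address Nginx proxies to (assumed)
timeout = 30             # Gunicorn's default worker timeout, in seconds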
Now this setup works all right 80% of the time. But when something goes wrong (for example, we detect a bug in the live system and have to take the server down for maintenance for a few hours; in the meantime, the mobile apps queue up requests, so when we bring the backend back up, a lot of users hit the system at the same time), the server stops behaving normally and starts responding with 504 gateway timeout errors.
Surprisingly, every time this happened, we found the server resources (CPU, memory) 70-80% free and the database connection pools mostly idle.
Any idea where the problem is? How can we debug it? If you have already faced a similar issue, please share the fix.
Thank you,

Jetty 404 error page on hot deployment

I am currently using Jetty 9.1.4 on Windows.
When I deploy the war file without hot deployment configured and then restart the Jetty service, all client connections to my Jetty server wait during the 5-10 second startup for the server to finish loading, and then clients are able to view the contents.
With hot deployment configured, however, the default Jetty 404 error page is shown during that 5-10 second loading interval.
Is there any way I can make hot deployment behave the same as a complete restart, so that client connections wait instead of seeing the 404 error page?
Unfortunately, after talking with the Jetty developers on IRC (#jetty), this does not currently seem to be possible.
One solution I will try is two Jetty instances behind a load-balancing reverse proxy (e.g. nginx), taking one instance down at a time for deployment.
Of course this immediately leads to new requirements (session persistence/sharing) that need to be handled. So in conclusion: much work to do in the Java world for zero downtime on deployments.
Edit: I will try this; it seems like a simple enough solution: http://rafaelsteil.com/zero-downtime-deploy-script-for-jetty/ (GitHub: https://github.com/rafaelsteil/jetty-zero-downtime-deploy)
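For what it's worth, the linked script is shell-based; the same rolling-restart idea rendered as a rough Python sketch looks like this. Service names, paths and health URLs are hypothetical, and the reverse proxy is assumed to fail over to the surviving instance while one is down:

import shutil
import subprocess
import time
import urllib.request

INSTANCES = [
    {"service": "jetty-a", "webapps": "/opt/jetty-a/webapps", "health": "http://localhost:8081/"},
    {"service": "jetty-b", "webapps": "/opt/jetty-b/webapps", "health": "http://localhost:8082/"},
]

def wait_until_up(url, timeout=120):
    # Poll until the restarted instance answers again, so that at most one
    # instance is ever out of rotation.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except OSError:
            time.sleep(2)
    return False

for inst in INSTANCES:
    # Take one instance down; the proxy now routes everything to the other.
    subprocess.check_call(["service", inst["service"], "stop"])
    shutil.copy("myapp.war", inst["webapps"])
    subprocess.check_call(["service", inst["service"], "start"])
    if not wait_until_up(inst["health"]):
        raise SystemExit("%s did not come back up" % inst["service"])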

Django-celery on multiple computers

I got everything I wanted to do with django-celery working on my development machine. More specifically, the app accepts photo URLs, which are then turned into tasks that download the photos on the same machine.
Now what I want to do is put the Django code on Heroku and run the celery tasks on a dedicated computer that will be kept in the office.
I don't know what the next step is, though. How do I tell the Django app to connect to the office computer? What is the process for setting up the office computer to accept tasks from the Django app? How do I give the office computer the login credentials it needs to connect to the database and update the models?
Ideally, I am looking to put something like this in my settings.py file:
remote_worker = '123.2.4.23:1234'
and on the office computer
tasks = 'photos/tasks.py'
remote_app = 'herokuapp123.com/myapp'
username = 'me'
password = 'pw'
I know there are a lot of questions. Any help or pointers would be appreciated!
This largely depends on what AMQP backend you are using for celery. If you are using the default (RabbitMQ), you will need to do one of the following:
Install RabbitMQ on the Heroku server, expose its port to your business IP through the firewall, and configure your office computer to connect to it
Install RabbitMQ locally on your business computer and configure celery on Heroku to connect to it
Install RabbitMQ on both sides and bridge them.
Alternatively, you can integrate the Heroku server into your own business network using a VPN solution and have them talk to each other directly (because, after all, you probably don't want to transmit AMQP packets bare over the interwebz).
Scenario 1 is probably the easiest to set up, as Heroku already provides the plugin infrastructure to do so. Scenario 2 is probably not what you want, as you will have to punch a hole in your business firewall for it. Both scenarios 1 and 2 will have latency and reliability issues, because routing AMQP traffic over the internet is not going to be fast or reliable: you will have dropped messages, and celery will keep retrying until it succeeds or reaches the maximum number of failures. AMQP was designed to handle network issues, but they may still hurt your performance if that is critical; in that case you should reconsider putting the celery workers on a business desktop.
Scenario 3 is probably the best in terms of reliability, but also more difficult to set up. Choose based on your needs.
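To make the moving pieces concrete, here is a minimal sketch of scenario 2 on the Django side; the host, vhost and credentials are hypothetical, and both the Heroku app and the office worker point at the same broker URL:

# settings.py: shared by the Heroku web app and the office worker
BROKER_URL = "amqp://celeryuser:celerypass@office.example.com:5672/photos"

# On the office machine, start a worker against the same settings
# (django-celery style), e.g.:
#   python manage.py celery worker --loglevel=info
# The worker imports your Django settings, so it reads its database
# credentials from DATABASES there; you don't pass them through the web app.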

BizTalkServerIsolatedHost disappeared from one server in multi-server group

Afternoon all,
We have a group of four BizTalk servers: two orchestration hosts and two adapter hosts. We have a number of orchestrations exposed as web services, and for the purposes of this question, it is important to note that these web services are hosted on the adapter servers, and run under the BizTalkServerIsolatedHost host instance.
This morning, we started seeing odd errors on both of the adapter servers when SOAP calls came into the web services, like this:
The Messaging Engine failed to register the adapter for "SOAP" for the receive location blahblahblah. Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases.
We restarted IIS on both servers, which fixed the errors on ONE server, but the other server continued to fail. The errors continued after a reboot as well.
After chasing our tails for a while, we eventually discovered that the BizTalkServerIsolatedHost host instance on the still-failing server was gone. Just... gone. These applications have been in production for months. Everything had been working swimmingly through the morning, until this just happened.
I don't want to muddy the waters, because I think the problems are unrelated, but in the interest of providing enough information, this problem exactly coincided with a problem in our load-balancing network hardware. The load balancer, which provides a single URL to consumers, and round-robins between the two adapter servers, just stopped working. This problem has not been resolved, so I don't know what happened, but it certainly made troubleshooting more interesting...
So, I have two questions:
Has anyone seen this before, where a host instance disappears?
We cannot find anything in the event viewer or anywhere else that says the host instance was deleted. Is this logged somewhere?
Thanks,
Jason