I have a Node.js app on Elastic Beanstalk running on multiple EC2 instances behind a load balancer (ELB).
Because of my app's requirements, I had to enable session stickiness.
I enabled an "AppCookieStickinessPolicy" using my custom cookie "sails.sid" as the reference.
The problem is that my app needs this cookie to work properly, but the moment I activate session stickiness (via Duration-Based Session Stickiness or, in my case, Application-Controlled Session Stickiness), the headers reaching my server are modified and I lose my custom cookie, which is replaced by the AWSELB (Amazon ELB) cookie.
How can I configure the load balancer not to replace my cookie?
If I understand correctly, the AppCookieStickinessPolicy should keep my custom cookie, but that is not the case.
Am I doing something wrong somewhere?
Thanks in advance.
Description of my load balancer:
{
    "LoadBalancerDescriptions": [
        {
            "AvailabilityZones": [
                "us-east-1b"
            ],
            ....
            "Policies": {
                "AppCookieStickinessPolicies": [
                    {
                        "PolicyName": "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664",
                        "CookieName": "sails.sid"
                    }
                ],
                "LBCookieStickinessPolicies": [
                    {
                        "PolicyName": "awseb-elb-stickinesspolicy",
                        "CookieExpirationPeriod": 0
                    }
                ],
                "OtherPolicies": []
            },
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "InstancePort": 80,
                        "LoadBalancerPort": 80,
                        "InstanceProtocol": "HTTP",
                        "Protocol": "HTTP"
                    },
                    "PolicyNames": [
                        "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664"
                    ]
                }
            ]
            ....
        }
    ]
}
The sticky session cookie set by the ELB is used to identify which node in the cluster to route requests to.
If you are setting a cookie in your application that you need to rely on, and then expecting the ELB to use that same cookie for stickiness, the ELB is going to overwrite the value you're setting.
Try simply allowing the ELB to manage the session cookie.
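If you go that route, switching the listener from the app-cookie policy to a duration-based (LB-generated) policy can be scripted. A minimal sketch using the AWS SDK for JavaScript v2; the load balancer name and policy name here are placeholders, not values from your environment:
```
// Sketch: attach a duration-based (LB-generated) stickiness policy to the port 80
// listener so the ELB manages its own AWSELB cookie and leaves "sails.sid" alone.
const AWS = require('aws-sdk');
const elb = new AWS.ELB({ region: 'us-east-1' });

async function useDurationBasedStickiness() {
  // CookieExpirationPeriod of 0 means the stickiness cookie lasts for the browser session.
  await elb.createLBCookieStickinessPolicy({
    LoadBalancerName: 'my-elb-name',          // placeholder
    PolicyName: 'lb-session-stickiness',      // placeholder
    CookieExpirationPeriod: 0,
  }).promise();

  // Replace the listener's policies with the new one.
  await elb.setLoadBalancerPoliciesOfListener({
    LoadBalancerName: 'my-elb-name',
    LoadBalancerPort: 80,
    PolicyNames: ['lb-session-stickiness'],
  }).promise();
}

useDurationBasedStickiness().catch(console.error);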
I spent a lot of time trying out the ELB stickiness features and routing requests from the same client to the same machine in a back-end server cluster.
The problem is, it didn't always work 100%, so I had to write a backup procedure using sessions stored in MySQL. But then I realised I didn't need the ELB stickiness functionality at all; I could just use the MySQL session system.
It is more complex to write a database session system, and there is of course an overhead in that every HTTP call will inevitably involve a database query. However, if this query uses a primary index, it's not so bad.
The big advantage is that any request can go to any server. Or, if one of your servers dies, the next one can handle the job just as well. For truly resilient applications, a database session system is inevitable.
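To make that per-request primary-key lookup concrete, here is a minimal Express middleware sketch using the mysql2 driver; the sessions table, cookie name, and connection details are hypothetical, and a real implementation would also handle session creation, writes, and expiry:
```
// Sketch: load the session row on every request via a primary-key lookup.
// Assumes a table like: CREATE TABLE sessions (session_id VARCHAR(64) PRIMARY KEY,
//                                              data TEXT, expires_at DATETIME)
const express = require('express');
const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'db-host', user: 'app', password: 'secret', database: 'app' });
const app = express();

app.use(async (req, res, next) => {
  // Crude cookie parse for the sketch; use a cookie parser in real code.
  const sid = req.headers.cookie?.match(/sid=([^;]+)/)?.[1];
  if (sid) {
    // Primary-key lookup, so the per-request overhead stays small.
    const [rows] = await pool.query(
      'SELECT data FROM sessions WHERE session_id = ? AND expires_at > NOW()',
      [sid]
    );
    req.session = rows.length ? JSON.parse(rows[0].data) : null;
  }
  next();
});

app.listen(3000);
```
Because the session lives in the database, any instance behind the load balancer can serve any request, which is exactly what removes the need for stickiness.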
Can someone please give me some insight into this error? Credentials are set via an IAM policy. This box is part of an Auto Scaling group, and it is the only one that got the following error.
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials\CredentialsInterface object
Logs:
CRITICAL
Phalconry\Mvc\Exceptions\ServerException
Extra
```
{
"remoteip": "XX.XX.XX.XX, XX.XX.XX.XX",
"userid": "1357416",
"session": "fcke8khsqe4lfo2lj6kdmrd4l7",
"url": "GET:\/manage-competition\/athlete",
"request_identifier": "xxxxxx5c80516bc11532.74367732",
"server": "companydomain.com",
"client_agent": "Mozilla\/5.0 (Linux; Android 9; SM-G965U) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/72.0.3626.121 Mobile Safari\/537.36",
"instance_ip_address": "xx.xx.xx.xx",
"process_id": 29528,
"file": "\/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/core\/classes\/phalconry\/mvc\/exceptions\/MvcException.php",
"line": 51,
"class": "Phalconry\\Mvc\\Exceptions\\MvcException",
"function": "dispatch"
}
```
Context
```
{
"Status Code": 500,
"Reason": "Internal Server Error",
"Details": "Array\n(\n [code] => server_error\n [description] => Uncaught Server Error\n [details] => Request could not be processed. Please contact Support.\n)\n",
"Log": "Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the \"key\" and \"secret\" options when creating a client or provide an instantiated Aws\\Common\\Credentials\\CredentialsInterface object. (Unable to parse response body into JSON: 4)",
"Trace": "#0 [internal function]: Phalconry\\Mvc\\MvcApplication::Phalconry\\Mvc\\{closure}(Object(Phalcon\\Events\\Event), Object(Phalcon\\Mvc\\Dispatcher), Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#1 [internal function]: Phalcon\\Events\\Manager->fireQueue(Array, Object(Phalcon\\Events\\Event))\n#2 [internal function]: Phalcon\\Events\\Manager->fire('dispatch:before...', Object(Phalcon\\Mvc\\Dispatcher), Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#3 [internal function]: Phalcon\\Mvc\\Dispatcher->_handleException(Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#4 [internal function]: Phalcon\\Dispatcher->dispatch()\n#5 \/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/sites\/games\/lib\/phalcon.php(101): Phalcon\\Mvc\\Application->handle()\n#6 \/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/sites\/games\/index.php(4): require_once('\/var\/www\/code_d...')\n#7 {main}"
}
```
The metadata for instance-profile credentials is under:
http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
If that is failing, it might be an issue with the hypervisor/droplet that your server has been spun up on. This endpoint will give you the last time the credentials were refreshed:
http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info
If other servers with identical AMIs and availability zones aren't having an issue, I would log a support ticket, terminate the instance, and move on.
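If you would rather check those endpoints from Node than from curl, here is a small sketch using the built-in http module (same URLs as above):
```
// Sketch: query the instance metadata service to see whether, and when,
// instance-profile credentials were issued to this box.
const http = require('http');

function getMetadata(path) {
  return new Promise((resolve, reject) => {
    http.get({ host: '169.254.169.254', path }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve({ status: res.statusCode, body }));
    }).on('error', reject);
  });
}

(async () => {
  // Last time the instance-profile credentials were refreshed.
  console.log(await getMetadata('/latest/meta-data/identity-credentials/ec2/info'));
  // The credentials document itself.
  console.log(await getMetadata('/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance'));
})();
```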
These credentials come from the instance metadata URL (http://169.254.169.254).
The problem you are likely having (especially since you are running in an ASG) is that when you create an AMI and then launch it in another AZ, the route to the metadata URL is not updated. What you need to do is force cloud-init to run on the next boot.
The simple way to do this is to clear out the cloud-init metadata directory:
sudo rm -f /var/lib/cloud/instances/*/sem/config_scripts_user
After you run that command, shut down the machine and create an AMI from it. If you use that AMI for your ASG, cloud-init will do a full run on first boot, which updates the routes to the instance metadata URL, and your IAM credentials should work.
From Stackdriver, I want to send notifications to my Centreon monitoring (based on Nagios) for workflow reasons. Do you have any idea how to do this?
Thank you.
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
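A minimal sketch of such a forwarding server, assuming an Express app and a placeholder forwardToCentreon() helper (which could be the REST submit shown in the next answer):
```
// Sketch: receive Stackdriver webhook notifications and hand them off to Centreon.
const express = require('express');
const app = express();

app.post('/stackdriver-webhook', express.json(), async (req, res) => {
  // The payload shape depends on your alerting policy; "incident" is the usual wrapper.
  const incident = req.body.incident || {};
  console.log('Alert received:', incident.policy_name, incident.state);

  // forwardToCentreon() is a placeholder for your own code, e.g. the
  // centreon_submit_results call described in the next answer.
  // await forwardToCentreon(incident);

  res.sendStatus(200); // respond 200 to acknowledge receipt
});

app.listen(8080, () => console.log('Webhook receiver listening on :8080'));
```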
There are two ways to send external information into the Centreon queue without a traditional passive agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated, already-known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, rather like an event tray.
A resource has a set number of “slots” to which alerts will be assigned (stored). As long as an event has not been taken into account by human action, it remains visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP trap.
All the configuration is done through the Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, the Centreon developers added a REST API you can use to submit information to the monitoring engine.
This feature is easier to use than the SNMP trap approach.
In that case, you have to create the host/service objects before using the API.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
Content-Type: application/json
centreon-auth-token: the value of the authToken you got in the authentication response
Example of a service submit body: the body is JSON, with the parameters described above, formatted as below:
{
    "results": [
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "Memory",
            "status": "2",
            "output": "The service is in CRITICAL state",
            "perfdata": "perf=20"
        },
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "fake-service",
            "status": "1",
            "output": "The service is in WARNING state",
            "perfdata": "perf=10"
        }
    ]
}
Example of a response body: the response body is JSON, with an HTTP return code and a message for each submission:
{
    "results": [
        {
            "code": 202,
            "message": "The status send to the engine"
        },
        {
            "code": 404,
            "message": "The service is not present."
        }
    ]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also lets you get real-time status for hosts and services, and manage object configuration.
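For reference, a minimal sketch of the submit call from Node (Node 18+ for the global fetch); the base URL, token, and host/service values are placeholders, and the token is assumed to come from the authentication request described in the linked documentation:
```
// Sketch: push a status to Centreon via the centreon_submit_results endpoint.
const CENTREON = 'https://api.domain.tld/centreon/api/index.php';
const authToken = '...'; // obtained from the authentication request (see the Centreon docs)

async function submitStatus() {
  const res = await fetch(`${CENTREON}?action=submit&object=centreon_submit_results`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'centreon-auth-token': authToken,
    },
    body: JSON.stringify({
      results: [
        {
          updatetime: String(Math.floor(Date.now() / 1000)),
          host: 'Centreon-Central',
          service: 'Memory',
          status: '2',
          output: 'The service is in CRITICAL state',
          perfdata: 'perf=20',
        },
      ],
    }),
  });
  console.log(await res.json()); // per-result code/message, as in the response example above
}

submitStatus().catch(console.error);
```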
What I have is one platform stack, and possibly multiple web application stacks (each representing one web application). The platform stack deploys an ECS platform that can host multiple web applications, but doesn't actually contain any; it's just the platform. Each web application stack then represents one web application.
One of the HTTPS listeners in my platform stack template is shown below. Basically, I have an HTTPS listener on port 443 carrying one default certificate (you need at least one certificate to create an HTTPS listener):
"BsAlbListenerHttps": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"Certificates": [{
"CertificateArn": {
"Ref": "BsCertificate1"
}
}],
...
"Port": "443",
"Protocol": "HTTPS"
}
},
...
Now, let's say I want to create a new web application (e.g. www.example.com). I deploy the web application stack, specify some parameters, and obviously end up creating a bunch of new resources. But at the same time, I have to modify the current "BsAlbListenerHttps".
I'm able to import the current listener (using Imports and Exports) into my web application stack. But what I want to do is also add a new certificate for www.example.com to the listener.
I've tried looking around but failed to find any answer.
Does anyone know how to do this? Your help is appreciated. Thank you!
What I do in similar cases is use only one certificate for the entire region, and add domains to it as I add apps/listeners on different domains. I also do this per environment, so I have a staging cert and a production cert in two different templates. For each, you define a standalone certificate stack, called for example certificate-production.json, but use 'certificate' as the stack name so that the stack reference is consistent regardless of the environment:
{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Description" : "SSL certificates for production V2",
    "Resources" : {
        "Certificate" : {
            "Type": "AWS::CertificateManager::Certificate",
            "Properties": {
                "DomainName": "*.example.com",
                "SubjectAlternativeNames" : [ "*.example2.com", "*.someotherdomain.com" ]
            }
        }
    },
    "Outputs": {
        "CertificateId" : {
            "Value" : { "Ref": "Certificate" },
            "Description" : "Certificate ID",
            "Export" : { "Name" : { "Fn::Sub": "${AWS::StackName}-CertificateId" } }
        }
    }
}
As you can see, by using the SubjectAlternativeNames property, this certificate will serve three wildcard domains. This way I can update the domains as I add services and rerun the stack. The dependent listeners are not changed in any way; they always refer to the single app certificate in the region.
One caveat: when you update a cert in CloudFormation, it will email all host administrators on the given domains (hostmaster@example.com, etc.). Each domain will get a confirmation email, and each email has to be confirmed again. If all the domains are not confirmed in this way, the stack will fail to create/update.
Using this technique, I can manage SSL for all my apps without any trouble, while making it easy to add new SSL endpoints for new domains.
I create the certificate stack right after the main VPC stack, so all later stacks can refer to the certificate id defined here via an export.
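To make that reference concrete, here is a sketch of how a later stack's HTTPS listener could import the exported value, assuming the certificate stack is named 'certificate' so the export resolves to certificate-CertificateId (the listener fragment mirrors the one from the question):
```
"BsAlbListenerHttps": {
    "Type": "AWS::ElasticLoadBalancingV2::Listener",
    "Properties": {
        "Certificates": [{
            "CertificateArn": { "Fn::ImportValue": "certificate-CertificateId" }
        }],
        ...
        "Port": "443",
        "Protocol": "HTTPS"
    }
}
```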
I have been wrestling with this for a couple of days now. I want to deploy Spring Cloud Data Flow Server for Cloud Foundry to my org's enterprise Pivotal Cloud Foundry instance. My problem is forcing all Data Flow Server web requests to TLS/HTTPS. Here is an example of a configuration I've tried to get this working:
# manifest.yml
---
applications:
- name: gdp-dataflow-server
  buildpack: java_buildpack_offline
  host: dataflow-server
  memory: 2G
  disk_quota: 2G
  instances: 1
  path: spring-cloud-dataflow-server-cloudfoundry-1.2.3.RELEASE.jar
  env:
    SPRING_APPLICATION_NAME: dataflow-server
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.system.x.x.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: my-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: my-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: my-domain.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: user
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: pass
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: dataflow-mq
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack_offline
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: dataflow-db
    SPRING_APPLICATION_JSON: |
      {
        "server": {
          "use-forward-headers": true,
          "tomcat": {
            "remote-ip-header": "x-forwarded-for",
            "protocol-header": "x-forwarded-proto"
          }
        },
        "management": {
          "context-path": "/management",
          "security": {
            "enabled": true
          }
        },
        "security": {
          "require-ssl": true,
          "basic": {
            "enabled": true,
            "realm": "Data Flow Server"
          },
          "user": {
            "name": "dataflow-admin",
            "password": "nimda-wolfatad"
          }
        }
      }
  services:
  - dataflow-db
  - dataflow-redis
Despite the security block in SPRING_APPLICATION_JSON, the Data Flow Server's web endpoints are still accessible via insecure HTTP. How can I force all requests to HTTPS? Do I need to customize my own build of the Data Flow Server for Cloud Foundry? I understand that PCF's proxy is terminating SSL/TLS at the load balancer, but configuring the forward headers should induce Spring Security/Tomcat to behave the way I want, should it not? I must be missing something obvious here, because this seems like a common desire that should not be this difficult.
Thank you.
There's nothing out-of-the-box from Spring Boot proper to enable/disable HTTPS and at the same time also intercept and auto-redirect plain HTTP -> HTTPS.
There are several online write-ups on how to write a custom Configuration class that accepts multiple connectors in Spring Boot (see example).
Spring Cloud Data Flow (SCDF) is a simple Spring Boot application, so all this applies to the SCDF-server as well.
That said, if you intend to enforce HTTPS throughout your application interactions, there is a PCF setting [Disable HTTP traffic to HAProxy] that can be applied as a global override in Elastic Runtime (see the docs). It applies consistently to all applications and is not specific to Spring Boot or SCDF; even Python, Node, or other types of apps can be forced to interact via HTTPS with this setting.
I have one OpsWorks Node.js stack with multiple Node.js apps set up on it. The problem is that every app's server.js listens on port 80 for the Amazon health check, but that port can only be used by one of them.
I don't know how to solve this. I have read the Amazon documentation but could not find a solution. I read that I could try changing the deploy recipe variables to move this health check to a different port, but it didn't work. Any help?
I battled with this issue for a while and eventually found a very simple solution.
The port is set in the deploy cookbook's attributes...
https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/deploy/attributes/deploy.rb
by the line...
default[:deploy][application][:nodejs][:port] = deploy[:ssl_support] ? 443 : 80
You can override this using the stack's custom JSON, for example:
{
    "deploy" : {
        "app_name_1": {
            "nodejs": {
                "port": 80
            }
        },
        "app_name_2": {
            "nodejs": {
                "port": 3000
            }
        }
    },
    "mongodb" : {
        ...
    }
}
Now the monitrc files at /etc/monit.d/node_web_app-<app_name>.monitrc should reflect their respective ports, and monit should keep them alive!
My solution was to implement a health-check Node service that listens on port 80. When the Amazon health check hits that service, it responds and executes its own logic to check the health of all the other services. It works great.
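A minimal sketch of that approach, assuming the other apps expose hypothetical /health endpoints on ports 3000 and 3001:
```
// Sketch: a tiny Node service on port 80 that answers the ELB/OpsWorks health check
// by probing the real apps running on other ports.
const http = require('http');

const BACKENDS = ['http://127.0.0.1:3000/health', 'http://127.0.0.1:3001/health']; // hypothetical

function ping(url) {
  return new Promise((resolve) => {
    http.get(url, (res) => resolve(res.statusCode === 200))
        .on('error', () => resolve(false));
  });
}

http.createServer(async (req, res) => {
  const results = await Promise.all(BACKENDS.map(ping));
  const healthy = results.every(Boolean);
  res.writeHead(healthy ? 200 : 503);
  res.end(healthy ? 'OK' : 'One or more services are down');
}).listen(80);
```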