I am fairly new to AWS. I wrote a script that creates an Elastic Beanstalk server and deploys code to it, which works fine.
I am able to get the IP address and instance ID using
aws ec2 describe-instances
I know a typical HTTP URL looks like this:
http://(cname-prefix).(region).elasticbeanstalk.com
and I used it to "generate" the URL in the script. But I want to check whether the URL can be obtained directly from the CLI.
The AWS CLI has a command for this: it describes the Elastic Beanstalk environment, and one of the resulting values is the endpoint URL. For load-balanced, autoscaling environments, it returns the URL of the load balancer; for single-instance environments, it returns the IP address of the instance.
See the docs:
aws elasticbeanstalk describe-environments --environment-names my-env
The output looks like this:
{
    "Environments": [
        {
            "ApplicationName": "my-app",
            "EnvironmentName": "my-env",
            "VersionLabel": "7f58-stage-150812_025409",
            "Status": "Ready",
            "EnvironmentId": "e-rpqsewtp2j",
            "EndpointURL": "awseb-e-w-AWSEBLoa-1483140XB0Q4L-109QXY8121.us-west-2.elb.amazonaws.com",
            "SolutionStackName": "64bit Amazon Linux 2015.03 v2.0.0 running Tomcat 8 Java 8",
            "CNAME": "my-env.elasticbeanstalk.com",
            "Health": "Green",
            "AbortableOperationInProgress": false,
            "Tier": {
                "Version": " ",
                "Type": "Standard",
                "Name": "WebServer"
            },
            "DateUpdated": "2015-08-12T18:16:55.019Z",
            "DateCreated": "2015-08-07T20:48:49.599Z"
        }
    ]
}
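If you only need a single field, the CLI's built-in --query flag (JMESPath) can extract it directly, e.g. --query "Environments[0].CNAME" --output text. Alternatively, you can parse the JSON yourself; a minimal Python sketch against the sample output above (values are illustrative):

```python
import json

# Sample output of `aws elasticbeanstalk describe-environments`,
# trimmed to the fields we need. In a real script you would
# capture this with subprocess.run(...).
raw = """
{
    "Environments": [
        {
            "EnvironmentName": "my-env",
            "EndpointURL": "awseb-e-w-AWSEBLoa-1483140XB0Q4L-109QXY8121.us-west-2.elb.amazonaws.com",
            "CNAME": "my-env.elasticbeanstalk.com"
        }
    ]
}
"""

env = json.loads(raw)["Environments"][0]
url = "http://" + env["CNAME"]
print(url)  # http://my-env.elasticbeanstalk.com
```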
I am writing a small Python function to find the AWS region of our AWS EIPs. When I use an IP-info service such as ipinfo.io, I get these details:
curl ipinfo.io/18.138.84.13/json
{
"ip": "18.138.84.13",
"hostname": "ec2-18-138-84-13.ap-southeast-1.compute.amazonaws.com",
"city": "Singapore",
"region": "Singapore",
"country": "SG",
"loc": "1.2897,103.8501",
"org": "AS16509 Amazon.com, Inc.",
"postal": "048508",
"timezone": "Asia/Singapore",
"readme": "https://ipinfo.io/missingauth"
}
From this information I get all the details except the region name. Of course, I can trim the hostname to grep out the region, i.e. ap-southeast-1, but that is not optimal.
Is there any way, in either boto3 or the AWS CLI, to hit the AWS API with a parameter like region=Singapore and get ap-southeast-1 in response?
Or is there an AWS API I can call directly with an EIP as input that returns details including the region name? I need the region name further down in my script to automate the job.
In AWS, IP addresses are assigned dynamically; the EIP pool automatically gives you whatever address is available. I would suggest not inferring the region from the IP address.
I have solved it with:
hostname.rsplit('.', 4)[1]
where hostname comes from the response of ipinfo.io/18.138.84.13/json.
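Wrapped up as a small function (note that this simple split assumes the region-qualified DNS form; us-east-1 hostnames use the older ec2-...compute-1.amazonaws.com style, which it does not handle):

```python
def region_from_hostname(hostname: str) -> str:
    """Extract the AWS region from an EC2 public DNS name, e.g.
    ec2-18-138-84-13.ap-southeast-1.compute.amazonaws.com.

    Caveat: us-east-1 addresses use the older
    ec2-...compute-1.amazonaws.com form, which this simple
    split does not handle.
    """
    return hostname.rsplit(".", 4)[1]

print(region_from_hostname(
    "ec2-18-138-84-13.ap-southeast-1.compute.amazonaws.com"))
# ap-southeast-1
```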
I ran aws configure and ask configure after installing ask-cli.
While setting up a new skill using ask new, I selected NodeJS and AWS with CloudFormation.
When trying to deploy the skill using ask deploy, I get: [Error]: CliError: The CloudFormation deploy failed for Alexa region "default": Access Denied.
I tried setting the region in ~/.aws/config and in ~/.aws/credentials, but I still run into the same error.
What should be done to fix the issue?
I've been able to deploy.
After running aws configure, I called ask new, and I think the solution was to select AWS Lambda instead of AWS with CloudFormation.
I wanted to use an existing skill that I had previously created in the web UI, so I created two folders: lambda and skill-package. Then I ran ask init, saying I didn't want to use AWS CloudFormation to deploy.
Next, I added my region in ask-resources.json, under skillInfrastructure:
{
    "askcliResourcesVersion": "2020-03-31",
    "profiles": {
        "default": {
            "skillMetadata": {
                "src": "./skill-package"
            },
            "code": {
                "default": {
                    "src": "./lambda"
                }
            },
            "skillInfrastructure": {
                "type": "#ask-cli/lambda-deployer",
                "userConfig": {
                    "runtime": "nodejs12.x",
                    "handler": "index.js",
                    "awsRegion": "eu-west-1"
                }
            }
        }
    }
}
And I finished with ask deploy, which worked!
While searching for how to enable stickiness in Elastic Beanstalk, I couldn't find a way to include it using AWS CloudFormation. Can anyone help me with that?
Thanks in advance.
If we look at the definition of sticky sessions, it says: "Sticky sessions are a mechanism to route requests to the same target in a target group."
In Elastic Beanstalk, a target group is represented by a process, so we need to set up stickiness at the process level using option settings.
You can take two approaches here (the below is for the "default" process; if you have configured additional processes, modify accordingly, but the implementation remains the same):
Option settings namespace: aws:elasticbeanstalk:environment:process:default
Valid options to set: StickinessEnabled, StickinessLBCookieDuration
First approach: specify the option settings in your CloudFormation template under the AWS::ElasticBeanstalk::Environment type, as described in the docs.
Sample:
"Environment": {
"Properties": {
"ApplicationName": {
"Ref": "Application"
},
"Description": "AWS Elastic Beanstalk Environment running Python Sample Application",
"SolutionStackName": {
"Ref": "SolutionStackName"
},
"VersionLabel": "Initial Version",
"OptionSettings": [
{
"Namespace": "aws:elasticbeanstalk:environment:process:default",
"OptionName": "StickinessEnabled",
"Value":"true"
},
{
"Namespace": "aws:elasticbeanstalk:environment:process:default",
"OptionName": "StickinessLBCookieDuration",
"Value":"43200"
}
]
},
"Type": "AWS::ElasticBeanstalk::Environment"
}
Second approach: configure this at the source-bundle level, i.e. create a .config file (say albstickiness.config) and place it in the .ebextensions folder. In the .config file, set stickiness for the ALB process.
A sample can be found in the docs under the subheading ".ebextensions/alb-default-process.config".
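A minimal albstickiness.config along those lines might look like this (the file name is arbitrary; the values mirror the CloudFormation sample above):

```yaml
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: 'true'
    StickinessLBCookieDuration: '43200'
```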
Alternatively, if you are defining a Classic Load Balancer resource directly in your CloudFormation template, you can try the LBCookieStickinessPolicy property:
LBCookieStickinessPolicy:
  - PolicyName: myLBPolicy
    CookieExpirationPeriod: '180'
You can read more about sticky sessions here and here.
When you have a certificate for your domain issued through AWS Certificate Manager, how do you apply that certificate to an Elastic Beanstalk application?
Yes, the Elastic Beanstalk application is load balanced and does have an ELB associated with it.
I know I can apply it directly to the ELB myself, but I want to apply it through Elastic Beanstalk so the environment configuration is saved in the CloudFormation template.
I found out you cannot do it through the Elastic Beanstalk console (at least not yet). However, you can still set it via the EB CLI or the AWS CLI.
Using EB CLI
Basically, what we are trying to do is update the aws:elb:listener setting; you can see the possible settings in the general options docs.
Using the EB CLI is pretty simple. Assuming we have already set up the awsebcli tool for our project, we can use the eb config command.
It will open your default terminal editor and let you change the settings, which are written as a YAML file. When you make a change and save it, eb config will automatically update the settings for your Elastic Beanstalk environment.
You will need to add the following settings to your config file:
aws:elb:listener:443:
  InstancePort: '80'
  InstanceProtocol: HTTP
  ListenerEnabled: 'true'
  ListenerProtocol: HTTPS
  PolicyNames: null
  SSLCertificateId: CERTIFICATE_ARN_HERE
Change the value CERTIFICATE_ARN_HERE to your ACM certificate's ARN. You can find it in the AWS Certificate Manager console.
IMPORTANT: Your aws:elb:listener:443 setting MUST be placed above the aws:elb:listener:80 setting; otherwise the environment configuration update will error out.
Using AWS CLI
The same can be accomplished using the general AWS CLI via the update-environment command.
aws elasticbeanstalk update-environment \
--environment-name APPLICATION_ENV --option-settings \
Namespace=aws:elb:listener:443,OptionName=InstancePort,Value=80 \
Namespace=aws:elb:listener:443,OptionName=InstanceProtocol,Value=HTTP \
Namespace=aws:elb:listener:443,OptionName=ListenerProtocol,Value=HTTPS \
Namespace=aws:elb:listener:443,OptionName=SSLCertificateId,Value=CERTIFICATE_ARN_HERE
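If you prefer boto3, the same option settings can be built in Python and passed to update_environment; the helper name and the placeholder values here are illustrative, not part of any API:

```python
def https_listener_settings(cert_arn):
    """Build the aws:elb:listener:443 option settings in the shape
    expected by Elastic Beanstalk's update-environment call."""
    namespace = "aws:elb:listener:443"
    options = {
        "InstancePort": "80",
        "InstanceProtocol": "HTTP",
        "ListenerProtocol": "HTTPS",
        "SSLCertificateId": cert_arn,
    }
    return [
        {"Namespace": namespace, "OptionName": name, "Value": value}
        for name, value in options.items()
    ]

# With boto3 (not run here; environment name and ARN are placeholders):
#   import boto3
#   boto3.client("elasticbeanstalk").update_environment(
#       EnvironmentName="APPLICATION_ENV",
#       OptionSettings=https_listener_settings("CERTIFICATE_ARN_HERE"),
#   )
```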
NOTE: When you update it via either of the methods above, the Elastic Beanstalk console will not show HTTPS as enabled, but the load balancer will, and the change is also applied to the CloudFormation template and saved in the EB configuration.
I find the simplest way is to change the EB load balancer via the console: click Change and select the new ACM certificate.
When you view the EB configuration, it will not appear, but it will be set.
You can do this purely with CloudFormation; however, as seems to be quite common with Elastic Beanstalk, the configuration options are much harder to find in the docs than they are for the individual components that make up Elastic Beanstalk. The info is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbloadbalancer
Basically, what you need to do is add the creation of the certificate to your template and then reference it in OptionSettings in AWS::ElasticBeanstalk::ConfigurationTemplate:
"Certificate" : {
"Type": "AWS::CertificateManager::Certificate",
"Properties": {
"DomainName": "example.com",
}
},
// ...
"ElasticbeanstalkTemplate": {
"Type": "AWS::ElasticBeanstalk::ConfigurationTemplate",
"Properties": {
"SolutionStackName": "MyEBStack",
"ApplicationName": "MyAppName",
"Description": "",
"OptionSettings": [{
"Namespace": "aws:elb:listener:443",
"OptionName": "InstancePort",
"Value": "80"
}, {
"Namespace": "aws:elb:listener:443",
"OptionName": "InstanceProtocol",
"Value": "HTTP"
}, {
"Namespace": "aws:elb:listener:443",
"OptionName": "ListenerProtocol",
"Value": "HTTPS"
}, {
"Namespace": "aws:elb:listener:443",
"OptionName": "SSLCertificateId",
"Value": {
"Ref": "Certificate"
}
}, /*More settings*/]
Check in which region you created the certificate and whether it matches the Elastic Beanstalk region. I had them in different regions, so it didn't work.
Previously I used a single-container Docker Elastic Beanstalk environment. It was able to use my login credentials, stored on S3, to download a container from a private Docker Hub repository.
However, I created a new multi-container Docker environment, and since then I always get the error:
change="{TaskArn:arn:aws:ecs:eu-west-1:188125317072:task/dbf02781-8140-422a-9b81-93d83441747d
ContainerName:aws-first-test Status:4
Reason:CannotPullContainerError:
Error: image test/awstest:latest not found ExitCode:<nil> PortBindings:[] SentStatus:NONE}"
(I'm using exactly the same container that worked before.)
The container does exist, and the environment is in the same region as the login credentials (Ireland).
My Dockerrun.aws.json:
{
    "AWSEBDockerrunVersion": 2,
    "authentication": {
        "Bucket": "docker-ireland",
        "Key": ".dockercfg"
    },
    "containerDefinitions": [
        {
            "name": "aws-first-test",
            "image": "test/awstest",
            "memory": 250
        },
        {
            "name": "aws-second-test",
            "image": "test/awstest",
            "memory": 250
        }
    ]
}
The Dockerrun.aws.json file is case sensitive, and in version 2 the keys authentication, bucket, and key were changed to lower case.
This answer is from the Amazon AWS forums: https://forums.aws.amazon.com/message.jspa?messageID=667098
In my case this error was caused by having something like the following in my S3 config file:
{
    "server": {
        "auth": "*****",
        "email": "*****"
    }
}
Not kidding, I had the keyword "server" instead of the registry URL (https://index.docker.io/v1/ for Docker Hub).
I must've copied it from some blog or documentation. Feeling dumb already.
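For reference, the corrected file keyed by the Docker Hub registry URL would look something like this (credentials redacted):

```json
{
    "https://index.docker.io/v1/": {
        "auth": "*****",
        "email": "*****"
    }
}
```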