I'm using an AWS CodeStar setup and I would like to add a database.config to the .ebextensions folder in my Rails project.
If you're wondering why I'm not adding the database through the console: CodeStar's pipeline fails at the final ExecuteChangeSet stage for CloudFormation changes and throws a 404 error; I assume CodePipeline is looking for the previous instance.
This is the error message I've been receiving; AWS suggests I edit Elastic Beanstalk directly. I'm somewhat lost as to how I can add a database to my project using Elastic Beanstalk without breaking CodeStar's CodePipeline ExecuteChangeSet.
You specified the 'AWSEBRDSDBInstance' resource in your configuration to create a database instance,
without the corresponding database security group 'AWSEBRDSDBSecurityGroup'. For a better way to add
and configure a database to your environment, use 'eb create --db' or the Elastic Beanstalk console
instead of using a configuration file.
My .ebextensions/database.config file so far:
Resources:
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: 5
      DBInstanceClass: db.t2.micro
      DBName: phctest
      Engine: postgresql
      EngineVersion: 10.4
      MasterUsername: username
      MasterUserPassword: password
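One way to sidestep the missing AWSEBRDSDBSecurityGroup, following the hint in the error message, is to let Elastic Beanstalk provision the database through its own option settings instead of a raw Resources block. A sketch, assuming the `aws:rds:dbinstance` option names from the Beanstalk documentation (the values mirror the config above and are placeholders):

```yaml
# Sketch: have Elastic Beanstalk create and couple the RDS instance itself,
# so it also creates the matching security group for you.
option_settings:
  aws:rds:dbinstance:
    DBEngine: postgres          # RDS engine identifier for PostgreSQL
    DBEngineVersion: "10.4"
    DBInstanceClass: db.t2.micro
    DBAllocatedStorage: 5
    DBUser: username
    DBPassword: password
```

Note also that RDS's PostgreSQL engine identifier is `postgres`, not `postgresql`, which would trip up the raw `AWS::RDS::DBInstance` resource above even with a security group in place.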
I could also create a separate standalone RDS database; I've thought about that, but I'd like to leave it to Elastic Beanstalk.
Related
When I create a Beanstalk environment using a saved configuration, it works, but it creates a new security group for no apparent reason and attaches it to the instances. I already provide a security group to allow SSH access to the instances from VPC sources.
I followed this thread and tried to restrict this behaviour with the following config inside .ebextensions:
Resources:
  AWSEBSecurityGroup: { "CmpFn::Remove": {} }
  AWSEBAutoScalingLaunchConfiguration:
    Properties:
      SecurityGroups:
        - sg-07f419c62e8c4d4ab
Now the creation process gets stuck at:
Creating application version archive "app-210517_181530".
Uploading stage/app-210517_181530.zip to S3. This may take a while.
Upload Complete.
Environment details for: restrict-sg-poc
Application name: stage
Region: ap-south-1
Deployed Version: app-210517_181530
Environment ID: e-pcpmj9mdjb
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Tomcat 8.5 with Corretto 11 running on 64bit Amazon Linux 2/4.1.8
Tier: WebServer-Standard-1.0
CNAME: UNKNOWN
Updated: 2021-05-17 12:45:35.701000+00:00
Printing Status:
2021-05-17 12:45:34 INFO createEnvironment is starting.
2021-05-17 12:45:35 INFO Using elasticbeanstalk-ap-south-1-############ as Amazon S3 storage bucket for environment data.
How can I do this properly so that my SG is added to the instances and no new SGs are created?
PS: I am using a shared ALB, so the SG created for the load balancer is not a problem right now.
I have an Ansible playbook that runs a CloudFormation template. The playbook should be able to run over and over, creating an unlimited number of servers on AWS, just like running create-stack repeatedly in CloudFormation. But when it is run more than once, it keeps updating the same resource it already created; it just changes the Name. I have been trying to fix this for two days. I need a way to create a NEW server in AWS no matter how many times I run the Ansible playbook. I believe the issue is the instance ID: since it sees one instance has already been created, it doesn't attempt to create a new one. Here is my CloudFormation template, uploaded to S3.
Parameters:
  KeyPair:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Connects to this
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-04681a1dbd79675a5
      InstanceType: t2.micro
      KeyName: !Ref KeyPair
And this is my Ansible playbook, which deploys the template from the S3 bucket. It is run with ansible-playbook provision.yml. Please help.
- hosts: localhost
  tasks:
    - name: first Task Provision ec2
      cloudformation:
        stack_name: 'provisiong-ec2'
        state: present
        region: us-east-1
        disable_rollback: true
        template_url: https://s3.amazonaws.com/randombuckets/ansy2-cloudformation.template
        template_parameters:
          KeyPair: rabbit
It's not creating a new instance because the stack_name has not changed and your CloudFormation template only builds one host (which has already been built).
Your immediate options are:
Create your instances using AutoScalingGroups (ASG) within CloudFormation. You can pass in the minimum number of hosts (MinSize) as a parameter and ASG will take care of the rest. You'll need to build in some logic to increment the count by one each iteration.
(not advised) Change the stack name every time you run the Ansible playbook
(not advised) Add another host to your CFT every time you want to run Ansible
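The ASG approach from the first option might look roughly like the sketch below. Only the shape matters here; the resource names, MaxSize, and the AvailabilityZones shortcut are illustrative, not taken from the original templates:

```yaml
# Sketch: replace the single AWS::EC2::Instance with an Auto Scaling group
# whose MinSize is passed in as a parameter. Re-running the playbook with a
# larger MinSize grows the fleet instead of updating one instance in place.
Parameters:
  MinSize:
    Type: Number
    Default: 1
    Description: Minimum number of instances the group should keep running
Resources:
  ServerLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-04681a1dbd79675a5   # same AMI as the original template
      InstanceType: t2.micro
  ServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: !Ref MinSize
      MaxSize: 10
      LaunchConfigurationName: !Ref ServerLaunchConfig
      AvailabilityZones: !GetAZs ''
```

The Ansible side stays the same; you would pass MinSize under template_parameters and increment it each run.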
The ASG section of the Beanstalk options documentation does not list it: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html?shortFooter=true#command-options-general-autoscalingasg - and neither does the rest of the page.
Right now I'm having to resort to manually editing the termination policy in the ASG page in EC2.
The following works for me. I use the .ebextensions folder to configure my apps. Inside it I have created a termination.config file containing this:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      TerminationPolicies: [ "OldestInstance" ]
Suddenly I can't add an RDS instance to my EB environment, and I'm not sure why. Here's the full error message:
Unable to retrieve RDS configuration options.
Configuration validation exception: Invalid option value: 'db.t1.micro' (Namespace: 'aws:rds:dbinstance', OptionName: 'DBInstanceClass'): DBInstanceClass db.t1.micro not supported for mysql db
I am not sure if this is due to the default AMI I am using or something else.
Note that I didn't choose to launch a t1.micro RDS instance. It seems like EB is requesting that class, but it has been removed from the available RDS instance classes.
I just found this link in the community forum: https://forums.aws.amazon.com/ann.jspa?annID=4840. It looks like Elastic Beanstalk has not updated its CloudFormation templates yet.
I think it's resolved now. But as a side note, AWS should not make things like this a community-forum-only announcement.
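While waiting on a fix, a supported instance class can be pinned explicitly in an .ebextensions config file. The namespace and option name below are the ones the validation error itself cites; this is a sketch of the idea, not a verified workaround:

```yaml
# Sketch: override the default (retired) db.t1.micro with a class that
# the mysql engine still supports, via the namespace from the error.
option_settings:
  aws:rds:dbinstance:
    DBInstanceClass: db.t2.micro
```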
I am creating an AWS EMR cluster running Spark using a CloudFormation template. I am using CloudFormation because that's how we create reproducible environments for our applications.
When I create the cluster from the web dashboard, one of the options is to add a key pair, which is necessary in order to SSH into the nodes of the cluster. http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/EMR_CreateJobFlow.html
I can't see how to do the same when using CloudFormation templates.
The template structure (see below) doesn't have an equivalent attribute.
Type: "AWS::EMR::Cluster"
Properties:
  AdditionalInfo: JSON object
  Applications:
    - Applications
  BootstrapActions:
    - Bootstrap Actions
  Configurations:
    - Configurations
  Instances:
    JobFlowInstancesConfig
  JobFlowRole: String
  LogUri: String
  Name: String
  ReleaseLabel: String
  ServiceRole: String
  Tags:
    - Resource Tag
  VisibleToAllUsers: Boolean
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-emr-cluster.html#d0e76479
I had a look at the JobFlowRole attribute, which is a reference to an instance profile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html). Again, no sign of KeyName.
Has anyone solved this problem before?
Thanks,
Marco
I solved this problem. I was just confused by the lack of naming consistency in CloudFormation templates.
What is generally referred to as KeyName becomes Ec2KeyName under the JobFlowInstancesConfig.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-emr-cluster-jobflowinstancesconfig.html#cfn-emr-cluster-jobflowinstancesconfig-ec2keyname
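In template form that looks roughly like the sketch below. Only the placement of Ec2KeyName under Instances is the point; the cluster name, release label, roles, and instance sizes are placeholders:

```yaml
# Sketch: Ec2KeyName inside the Instances (JobFlowInstancesConfig) block
# is the CloudFormation equivalent of the console's Key Pair option.
Parameters:
  KeyPair:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  SparkCluster:
    Type: AWS::EMR::Cluster
    Properties:
      Name: my-spark-cluster
      ReleaseLabel: emr-5.20.0
      Applications:
        - Name: Spark
      JobFlowRole: EMR_EC2_DefaultRole
      ServiceRole: EMR_DefaultRole
      Instances:
        Ec2KeyName: !Ref KeyPair   # the KeyName equivalent lives here
        MasterInstanceGroup:
          InstanceCount: 1
          InstanceType: m4.large
        CoreInstanceGroup:
          InstanceCount: 2
          InstanceType: m4.large
```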