I created an instance in AWS Elastic Beanstalk and use it with a git repository.
There are two files outside this repository: config.php and .htaccess.
I could create them with vim inside the instance via ssh, but when I upload a new version they are erased.
What is the correct way to work with files outside the repository, like db connection and custom configurations?
The idea behind Elastic Beanstalk (and other application PaaS's, as this isn't unique to Elastic Beanstalk) is that the server the application runs on is in essence stateless. This means that any local changes that you make to the instance will be gone if that instance is replaced.
This can be the case when using AutoScaling Groups that cause instances to be terminated and created based on demand. This can also happen if your instance has issues and is deemed in a bad state.
Thus if you SSH into an EC2 instance, create files, and then push a new version of your application, your instance is torn down and brought back up, and the files aren't there anymore.
If you want to persist information that isn't in version control (often application secrets like API keys, credentials, specific configuration, etc.), then one way to do that is to add it to environment variables which you can learn about here: http://docs.aws.amazon.com/gettingstarted/latest/deploy/envvar.html
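As a minimal sketch of that pattern (the variable names and values below are placeholders; in a real environment Beanstalk injects whatever key/value pairs you define in the console or via `eb setenv`):

```shell
# Stand-ins for values Beanstalk would inject, e.g. after:
#   eb setenv RDS_HOSTNAME=... RDS_PASSWORD=...
export RDS_HOSTNAME="db.example.internal"
export RDS_PASSWORD="s3cret"

# The application then reads them at runtime instead of a committed
# config.php -- e.g. getenv('RDS_HOSTNAME') in PHP. printenv stands in
# for that lookup here:
printenv RDS_HOSTNAME
```

Your config.php then becomes a thin shim that reads the environment, so it is safe to commit.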
@Josh Davis is correct in saying "The idea behind Elastic Beanstalk (and other application PaaS's as this isn't unique to Elastic Beanstalk) is that the server the application runs on is in essence stateless. This means that any local changes that you make to the instance will be gone if that instance is replaced."
In layman's terms, that means that the server can be rebuilt at any time and any data that was persisted to disk is lost.
If you'd like to persist the above two files without version control then I would suggest using ebextensions >> http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Example:
files:
  "/home/ec2-user/myfile":
    mode: "000755"
    owner: root
    group: root
    source: http://foo.bar/myfile
  "/home/ec2-user/myfile2":
    mode: "000755"
    owner: root
    group: root
    content: |
      # this is my file
      # with content
The first example will download a file from http://foo.bar/myfile and create that file and its contents on the file system at /home/ec2-user/myfile
The second example will create a file with the contents you specify; in this case the file will contain the two comment lines # this is my file and # with content.
If you use the second option, always run it through a YAML validator >> http://www.yamllint.com/
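You can also do that check locally. A hedged sketch (assumes python3 with the PyYAML package is available; the file path is arbitrary):

```shell
# Write the second example to a scratch file and parse it before deploying.
cat > /tmp/files.config <<'EOF'
files:
  "/home/ec2-user/myfile2":
    mode: "000755"
    owner: root
    group: root
    content: |
      # this is my file
      # with content
EOF

# Fails loudly on bad indentation instead of failing silently at deploy time.
python3 -c "import yaml; yaml.safe_load(open('/tmp/files.config')); print('valid YAML')"
```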
Related
Ok, so I've got a web application written in .NET Core which I've deployed to the AWS Elastic beanstalk which was pretty easy, but I've already hit a snag.
The application fetches JSON data from an external source and writes to a local file, currently to wwwroot/data/data.json under the project root. Once deployed to AWS, this functionality is throwing an access denied exception when it tries to write the file.
I've seen something about creating a folder called .ebextensions containing a config file with container commands that set permissions on certain paths/files after deployment. I've tried doing that, but it does not seem to do anything for me. I don't even know if those commands are being executed, so I have no idea what's happening, if anything.
This is the config file I created under the .ebextensions folder:
{
  "container_commands": {
    "01-aclchange": {
      "command": "icacls \"C:/inetpub/AspNetCoreWebApps/app/wwwroot/data\" /grant DefaultAppPool:(OI)(CI)"
    }
  }
}
The name of the .config file matches the application name in AWS, but I also read somewhere that the name does not matter, as long as it has the .config extension.
Has anyone successfully done something like this? Any pointers appreciated.
Rather than trying to fix permission issues writing to the local storage within AWS Elastic Beanstalk, I would instead suggest using something like Amazon S3 for storing files. Some benefits would be:
Not having to worry about file permissions.
S3 files are persistent.
You avoid the risk of losing local files when you republish your application.
If you ever move to using something like containers, you will lose the file every time the container is taken down.
S3 is incredibly cheap to use.
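As a hedged sketch of what the flow from the question becomes (the bucket name my-app-data and all paths are made up; the instance profile would need s3:PutObject on the bucket, and the real upload is commented out here):

```shell
BUCKET="my-app-data"                           # hypothetical bucket
KEY="data/data.json"
printf '{"fetched":true}\n' > /tmp/data.json   # stands in for the fetched JSON

# Push to S3 instead of writing under wwwroot -- no icacls needed.
TARGET="s3://$BUCKET/$KEY"
echo "uploading to $TARGET"
# aws s3 cp /tmp/data.json "$TARGET"           # real upload; needs role/credentials
```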
I am building a web app using Elastic Beanstalk on AWS and was wondering if there is a way to edit the source code without having to re-upload a zip of my application every time I want to make an edit.
Thanks!
The Elastic Beanstalk environment is based on EC2 instances. You can connect to your instances using SSH and, inside the instance, find your deployed source code. If you use a non-compiled language, like JavaScript (Node) or Python, you can edit the code directly. If you use Java you will need to upload the source code and compile it, perhaps using the environment's JDK.
But keep in mind two details:
You must install your compiled/edited code in the same path used by Elastic Beanstalk;
If your instance is reinitialized, your changes will be lost, because in that case EB will get a fresh copy of your code based on your last upload.
How can I make a fresh EC2 instance that has just booted from an auto scaling group grab and save sensitive .env data such as db credentials, encryption keys, etc.?
You can configure that in the User Data settings when you are creating a new instance.
You can add a curl command there. Something like this:
curl -o <directory you need to copy>/test.dev https://s3.amazonaws.com/bucket-name/test.dev
An option could be the User Data section in the Advanced Details accordion (bottom of the page) during launch config creation, but I recommend using git (or similar) to download the config/other sensitive data. Injecting an ENV variable during startup that contains the branch name (which could also act as a stage name) makes the whole thing environment/stage independent, and you only need to maintain one AMI.
Another option could be a Docker image build: during startup of the container (or during the image build) you can also inject ENV variables (or mount an Elastic File System into the container) containing the config elements.
A third option could be to put together a CloudFormation stack, which will configure the start/update of the stack to pull the config elements to the stack.
The fourth option is to use OpsWorks (Chef) to do the configuration updates on a regular schedule (cron/service) or by triggered automation (CI/CD tools like Jenkins, Bamboo, or Travis could do this easily).
I'm trying to automate the process of deploying code using GitHub and a Jenkins job to deploy my Spring Boot application on AWS.
I want to know where I should place the application.properties file if I'm deploying a war file on Tomcat. I don't want this file pushed to GitHub, as it may contain database credentials that shouldn't be exposed.
Should I put a separate application-prod.properties file in Tomcat (AWS) so that my war file is independent of these properties?
See my answer here.
In a nutshell, you externalise the properties and then pass one or more profiles that will activate one or more Spring Configuration classes. Each Configuration class will load one or more property files. In your case, if you only have one environment, you can just create a configuration file for one profile.
Then, on your AWS instance, you will deploy the configuration file separately. At runtime, you will need to pass the active profile(s) to your Spring Boot application. You can do this by passing the VM argument: -Dspring.profiles.active=[your-profile]
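Since a war deployed on Tomcat doesn't get its own java command line, a common place for that VM argument is Tomcat's bin/setenv.sh. A hedged sketch (the /opt/app-config/ location for the externalised properties is an assumption, and I write to /tmp only so the sketch is self-contained):

```shell
# On the server this file would be $CATALINA_HOME/bin/setenv.sh.
cat > /tmp/setenv.sh <<'EOF'
export JAVA_OPTS="$JAVA_OPTS -Dspring.profiles.active=prod -Dspring.config.additional-location=/opt/app-config/"
EOF

# Simulate Tomcat sourcing it at startup and show the resulting flags:
. /tmp/setenv.sh
echo "$JAVA_OPTS"
```

With this, application-prod.properties lives in /opt/app-config/ on the instance, never in the war or in GitHub.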
How about using spring-cloud-starter-config instead of local properties?
If you use spring-cloud-starter-config, all configurations are loaded from your config center instead of being read locally.
Even if you have multiple different environments, spring-cloud-starter-config could handle it with different profiles.
What's more, spring-cloud-starter-config could use local environment variables too.
By the way, if you are using spring-cloud-starter-config, the only local resource could be bootstrap.yml.
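For illustration, a minimal bootstrap.yml might look like this (the application name and config-server URL below are made-up values, not anything from your setup):

```yaml
spring:
  application:
    name: my-app            # used to pick this app's config on the server
  cloud:
    config:
      uri: http://config-server.example.internal:8888   # hypothetical config center
      profile: prod
```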
Hope this helps!
I am encountering what I believe to be permission issues when trying to deploy a Drupal application onto Elastic Beanstalk
I followed this tutorial to get Drupal up and running: http://comm-press.de/en/blog/drupal-climbs-aws-elastic-beanstalk
I am using a Postgres database and I am entering the correct credentials when filling out the forms on install.php, without error.
Any subsequent deploys after the initial deploy bring me back to install.php. After entering my database information, I get this message, telling me Drupal is already installed (which it is).
http://i.imgur.com/N6KDvvo.png
Why does my site get redirected to install.php after 'eb deploy'? What permissions should I set on my Drupal folder so that settings.php and /sites/default/files are generated?
The install state is controlled by the DB-- if Drupal bootstraps with no DB information, you are routed to the code that asks you for it.
I was able to bypass this part by setting up an AWS RDS DB and connecting all instances to it.
--But wait, there's more. Having all the instances read from the same DB squashes most of the concurrency problems between instances. Now go ahead, try to add a photo to your admin profile. I will wait. Yep, most of the time you'll hit the wrong instance, and the single image stored on one instance is not shown on all instances.
I am working on solving that problem with a startup & cron job script that loads updates to resources from the AWS S3 service.
Step A: load code into S3
Step B: set an accessible timestamp for $lastModified to now()
Step 1: wget/curl the timestamp of the last remote modification ($lastModified)
Step 2: compare the local last-updated stamp ($lastUpdated) to the remote last-modified timestamp
Step 3: if ($lastModified == $lastUpdated) {die} else {update incremental changes && set $lastUpdated = $lastModified}
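The steps above can be sketched as a cron-able shell script (the bucket name, paths, and hard-coded timestamps are placeholders, and the actual sync call is commented out so only the comparison logic runs):

```shell
# Step 1: in practice, fetch the remote stamp, e.g.
#   lastModified=$(curl -s https://s3.amazonaws.com/my-bucket/lastModified)
lastModified="2024-05-01T10:00:00Z"
# Local stamp saved after the previous update:
lastUpdated="2024-04-30T09:00:00Z"

# Steps 2-3: compare, and only pull incremental changes when they differ.
if [ "$lastModified" = "$lastUpdated" ]; then
  echo "up to date"
else
  # aws s3 sync s3://my-bucket/code /var/app/current   # the actual update
  echo "updating: $lastUpdated -> $lastModified"
  lastUpdated="$lastModified"
fi
```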
Watch that first incremental update, it's a doozy.
So... additionally I should mention that we install completely vanilla Drupal when we instantiate an image; as part of the Dockerfile (based on the official Drupal Apache image), the last thing the Dockerfile runs is a setup script.
Elastic Beanstalk sets environment variables-- some of those variables are the Amazon access key ID and secret access key.
I curl an IP only available inside of EC2: curl -v 169.254.169.254
From that output in the setup script I can tell whether I am running locally or in AWS EB. That allows me to conditionally change certain configurations-- like connecting Drupal to RDS or to a local MySQL DB.
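A hedged sketch of that detection trick: the instance metadata endpoint only answers from inside EC2, so a short-timeout curl that fails means we are running locally.

```shell
# --max-time keeps this from hanging when the endpoint is unreachable.
if curl -s --max-time 1 http://169.254.169.254/latest/meta-data/ >/dev/null 2>&1; then
  WHERE="aws"      # metadata endpoint answered: we're on EC2/Elastic Beanstalk
else
  WHERE="local"    # no answer: assume a local/dev environment
fi
echo "running in: $WHERE"
```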
The setup script uses the aws CLI to pull from S3 (sync) to add, replace, and update everything in the webroot, turning the instance into a set-up Drupal installation as far as file-level assets go.
A lot of sed and service reloads are done. ElastiCache vs local Redis...
Last, we start the web server in the foreground && tail -f /dev/null so the container doesn't immediately close.
Drupal is just for static asset pages and a header/menu/footer wrapper for our web app (templates are served...; Twig/JS fills the template in with data). Authentication happens via API-- we're not even using 90% or so of the goodness in Drupal...
Incremental changes are pulled by comparing hash values and acting to run the update process if they are different.