I'm redoing, in Python/Django, a badly built web application that my company uses (having decided that was the best tool for the job).
I don't have much time to spend on development, which means I have even less time to get it deployed. Since the app is resource-intensive and will be used by a lot of people concurrently, I'd like to be able to take advantage of all the tools that AWS offers, such as RDS, ElastiCache, CloudWatch, and especially any auto-scaling tools.
I've seen Heroku and liked it, but its price seems quite high and I would prefer to use AWS.
I don't mind getting my hands dirty as long as it doesn't take half the development time setting up deployment.
I'm looking for something we can use, whether a service or an AMI, that lets us deploy automatically from our repository, without spending days configuring it and figuring out how to get it working, and without drastically increasing the cost of hosting our app.
Since you want something quick and simple, consider RightScale's ServerTemplates to get up and running fast. RightScale has a free developer account. There are a few Django ServerTemplates, and they are all priced for "All Users", so they'll work with the free developer account.
That will get you a base application stack quickly.
Next, I'd look into using Fabric (similar to Capistrano) and/or GitHub post-commit hooks to automate deployment of your application.
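For example, a minimal fabfile sketch (Fabric 1.x style; the host, paths, and service name are placeholders, not anything from your setup):

```python
# fabfile.py -- a minimal deployment sketch with Fabric 1.x.
from fabric.api import cd, env, run, sudo

env.hosts = ['app.example.com']  # placeholder: your EC2 instance(s)

def deploy():
    """Pull the latest code, update dependencies, and restart the app."""
    with cd('/srv/myapp'):  # placeholder app directory
        run('git pull origin master')
        run('venv/bin/pip install -r requirements.txt')
        run('venv/bin/python manage.py migrate')  # Django schema migrations
    sudo('service gunicorn restart')  # placeholder app server
```

With that in place, `fab deploy` rolls out the latest commit, and a post-commit hook can invoke the same task.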
Once you're happy with that and have more time on your hands you could look at adding all the other stuff you want to use (ElastiCache, etc).
Heroku runs on AWS: http://devcenter.heroku.com/articles/external-services
So you can use AWS services from Heroku just as you would from any EC2 instance. If you really want to, use Heroku for the hard-to-set-up services and a small AWS EC2 instance for the services you'd rather run yourself.
To automate the deployment you can use a third-party tool like Capistrano or http://nudow.com. Capistrano will do a lot of the deployment, but you have to host it yourself and do the deployment in a specific way for it to work correctly (such as using the same keys everywhere). Nudow.com is easier to set up and is hosted; it deploys to your existing infrastructure and handles things like versioning. It also has tools for things like minifying JavaScript/CSS and uploading to CloudFront.
Related
Hi, I am using the Serverless Framework to develop my application, and I need to set it up in a local environment. I am using API Gateway, Lambda, VPC, SNS, and SQS, and the DB is connected via VPC peering. Currently I deploy every time I test my code, which is a tedious process that takes 5 minutes per deploy. Is there any way to set up a local environment that has everything in one place?
It should be possible in theory, but it is not an easy thing to do. There are products like LocalStack that offer exactly this.
But I would not recommend going that route. By design, this will always be a huge cat-and-mouse game: AWS introduces a new feature or changes some minor detail of its implementation, and products like LocalStack have to catch up. Furthermore, you will only ever get an approximation of the actual cloud; it will never be a 100% match.
I would also expect a lot of work to get products like LocalStack working properly with your particular setup, and to keep them running well.
Therefore, I would propose investing the same time into a proper developer experience within the actual cloud. That is what we do: every developer deploys their own version of the project to AWS.
This is also not trivial, but the end result is not a "fake version" of the cloud that might or might not reflect the "real cloud".
The key to achieving this is infrastructure as code and as much automation as possible. We use Terraform and Makefiles, which works very well for us. Done properly, we only ever build and deploy what changed, so changes reach AWS in seconds and the developer can test the result either through the Makefile itself or in the AWS console.
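As an illustration only (not our exact setup), the per-developer part can be as small as a wrapper that selects a Terraform workspace named after the developer and applies; the workspace naming convention here is made up:

```python
# deploy.py -- rough sketch of a per-developer deploy using Terraform
# workspaces to isolate each developer's copy of the stack.
import getpass
import subprocess

def deploy_developer_stack():
    workspace = f"dev-{getpass.getuser()}"  # hypothetical convention, e.g. dev-alice
    # Select the developer's workspace, creating it on first use.
    if subprocess.run(["terraform", "workspace", "select", workspace]).returncode != 0:
        subprocess.run(["terraform", "workspace", "new", workspace], check=True)
    # Terraform only touches resources whose definitions changed, which is
    # what keeps incremental deploys down to seconds.
    subprocess.run(["terraform", "apply", "-auto-approve"], check=True)

if __name__ == "__main__":
    deploy_developer_stack()
```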
Another upside is that, in theory, you need to do the same work anyway for continuous deployment, so you ultimately reduce work by not having to maintain both local deployments and cloud deployments.
I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what DevOps tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see that Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) suggests other tools like Ansible, Chef, Puppet, and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also ensure a chosen name for a project, VM or Cloud Storage bucket fits with a specific naming convention with one of these systems?
Which system do others use and why?
I use Deployment Manager, since all third-party tools rely on the presence of GCP APIs and on trusting that those APIs keep pace with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use Terraform or whatever, at some point you're going to end up inside the SDK anyway. That's why I went with Deployment Manager, as much as I wanted to build my whole infra/app deployment with other tools I was more comfortable with.
To specifically answer your question about validating a naming schema: what you would probably want to do is write a wrapper script around the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager command.
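For example, a sketch of such a wrapper in Python (the naming regex is a made-up convention; substitute your own):

```python
# deploy_wrapper.py -- validate the deployment name, then hand off to
# `gcloud deployment-manager`. Usage: deploy_wrapper.py NAME CONFIG_FILE
import re
import subprocess
import sys

# Hypothetical convention: org prefix, environment, then a short slug.
NAME_PATTERN = re.compile(r"^myorg-(dev|test|prod)-[a-z0-9-]{1,40}$")

def main():
    deployment_name, config_file = sys.argv[1], sys.argv[2]
    if not NAME_PATTERN.match(deployment_name):
        sys.exit(f"'{deployment_name}' does not match the naming convention")
    subprocess.run([
        "gcloud", "deployment-manager", "deployments", "create",
        deployment_name, "--config", config_file,
    ], check=True)

if __name__ == "__main__":
    main()
```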
A word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it obscures the error that would help you establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope Google takes note of my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.
I'm hoping I can get some insight into the AWS Elastic Beanstalk environment before investing considerable time in switching projects to the AWS platform.
I am an entry-level web developer, so I do "A LOT" of A/B testing on very minor changes. It is my understanding that I have to upload an entire package or application to AWS Elastic Beanstalk. Is that correct, or can I change a single file, upload it, and test it?
I have a handful of media sites (PHP/MySQL) that occasionally go viral, so I like the idea of auto scaling.
If I have to upload an entire application each time, what strategy do you recommend for someone that needs to be able to do regular small A/B tests of their code?
Thanks for your help!
Todd
How often and how large are the changes done for the A/B testing? Per https://aws.amazon.com/getting-started/tutorials/update-an-app/ the application does need to be re-deployed if the source code changes.
If you want the ability to edit files live in production... that is not exactly best practice. In fact, it is very frowned upon. Best practice is to make the change in DEV, then push the application through an automated process that deploys it. Use logging and monitoring tools to watch production. A/B testing is very easy to rationalize using CloudWatch logs and a visualizer like Grafana.
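For instance, a sketch of publishing A/B outcomes as custom CloudWatch metrics, shown in Python/boto3 for brevity (the namespace and metric names are made up; the equivalent putMetricData call exists in the PHP SDK):

```python
# ab_metrics.py -- count A/B conversions as a custom CloudWatch metric,
# split by a "Variant" dimension so the two arms can be graphed side by side.
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_conversion(variant):
    """Record one conversion for variant 'A' or 'B'."""
    cloudwatch.put_metric_data(
        Namespace="MySite/ABTests",  # hypothetical namespace
        MetricData=[{
            "MetricName": "Conversions",
            "Dimensions": [{"Name": "Variant", "Value": variant}],
            "Value": 1,
            "Unit": "Count",
        }],
    )
```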
A couple of other options: look at Lightsail, Heroku, or other PaaS providers.
Hope this helps.
OK, so I would like to build my application in a way that allows each organization to get its own instance.
My thinking is that I could use AWS or DigitalOcean or whatever to deploy my Java (Dropwizard) application every time a new client registers their company with us.
This would be virtualized, I'm hoping, so those instances would run on various virtual servers.
Basically, when a company registers, I would like to spin up an instance of the core API and an instance of the DB server (or the two could be one instance here, I guess).
Is this a thing? I would google it, but I am not fully sure what to be looking for!
I know this is not a Dropwizard question, but I tagged it this way because it is a Dropwizard application I am building, and I figure people in that community may have had similar concerns. Please feel free to edit!
You would need to automate the process of spinning up an environment using something like CloudFormation, Ansible, Terraform, Chef, Puppet, etc. There are a lot of tools in this space, collectively called Infrastructure as Code (IaC). Once you have it automated, setting up a new environment for a new customer is a simple matter of kicking off the appropriate script.
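For example, a minimal sketch with boto3 and CloudFormation (the template file and parameter names are hypothetical):

```python
# provision.py -- spin up a dedicated stack for a newly registered company.
import boto3

def provision_customer(customer_slug):
    """Create a per-customer API + DB stack from an IaC template."""
    cloudformation = boto3.client("cloudformation")
    with open("customer-stack.yaml") as f:  # hypothetical template
        template_body = f.read()
    cloudformation.create_stack(
        StackName=f"customer-{customer_slug}",
        TemplateBody=template_body,
        Parameters=[{
            "ParameterKey": "CustomerSlug",  # hypothetical parameter
            "ParameterValue": customer_slug,
        }],
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM roles
    )

# e.g. call provision_customer("acme") from your registration handler
```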
We have our production site in Elastic Beanstalk. SNS notifications are a good feature for keeping us updated about the environment status whenever it changes. But we want to watch the production environment logs more closely.
Our project is a Java web application. We want to check the status of the production environment from our other Beanstalk environments, i.e., the beta and staging environments, which are in the same region and within the same application.
Our goals are to:
1. Use the AWS SDK or other AWS tools to fetch the production Beanstalk Tomcat logs and display them on a page in our beta site.
2. Run a tool periodically from the beta environment against the live environment that tests the site, i.e., whether all code-level mappings are good, and emails us any exceptions.
To break point 2 down further: we have a Quartz scheduler to run a job at a particular time, and we plan to add a script that tests the complete environment periodically. Are there any Beanstalk built-in tools that test the complete site, accessing all URLs and testing the DB-to-Java-object mappings (Hibernate mappings), etc.?
We already use the Elastic Beanstalk S3 bucket to check the Tomcat logs, but we would like to implement points 1 and 2 if possible.
--
Thanks
For Item #1:
I don't recommend using beta and dev to watch production. Instead, here's what I'd do:
Set up Pingdom on all three environments, so you can keep a close eye on uptime.
Review your logging code. Do you have an explicit pattern/idiom for exception handling in place? Is your logging functioning?
Set up Papertrail with Logback. Why? You'll have real-time aggregate log tailing for each and every machine you set up a syslog receiver for. For beanstalk-maven-plugin, we are about to release an archetype (see an example 'blank' project created from it). Even if you're not using it, it's worth seeing how it's used.
Set up log rollout to S3. As it is, that alone is of limited use, so I suggest you work it into something you can import for analysis (or better yet, export for use from Hive, which is something Papertrail does).
Define your health-check code accordingly. Think about what could go wrong in terms of dependencies.
Look at / set up some CloudWatch metrics. If your application is heavy and you're on a t1.micro, under which conditions would it spike? Use that to your advantage (see the sketch after this list).
Those are just a few ideas.
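As a sketch of the CloudWatch item (the Auto Scaling group name is a placeholder, and you'd pick whichever metric matters for your app):

```python
# watch_prod.py -- pull the last hour of average CPU for the production
# Auto Scaling group, e.g. to feed a status page or an alert.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def cpu_last_hour(autoscaling_group):
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": autoscaling_group}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,                # one datapoint per 5 minutes
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda d: d["Timestamp"])
```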
w/r/t Item #2:
I suggest you rethink your structure. I actually dislike the idea of using crontab on Elastic Beanstalk servers, since it's error-prone (leader_only? managing output?). Instead, I use my new favourite crontab webapp, Jenkins, and set up an integration-testing / smoke-testing artifact with only the relevant bits to remotely test the instance. Selenium might help, but if your services are critical, you might be happier relying on rest-assured, for instance.
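For illustration, a rough smoke-test sketch in Python with requests (standing in for rest-assured here; the URLs and expected responses are hypothetical):

```python
# smoke_test.py -- minimal remote smoke test a Jenkins job could run
# against the live environment; a non-zero exit fails the build.
import sys
import requests

BASE_URL = "http://live.example.elasticbeanstalk.com"  # placeholder endpoint

CHECKS = [
    ("/health", 200),     # health-check endpoint
    ("/api/items", 200),  # exercises the DB / Hibernate mappings
]

def main():
    failures = []
    for path, expected in CHECKS:
        status = requests.get(BASE_URL + path, timeout=10).status_code
        if status != expected:
            failures.append("%s: got %s, expected %s" % (path, status, expected))
    if failures:
        sys.exit("\n".join(failures))

if __name__ == "__main__":
    main()
```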
Hope it helps.