EC2 Instance Metadata on Development Machines - amazon-web-services

On an EC2 instance, you can get metadata from a 'local' web server by making requests like:
GET http://169.254.169.254/latest/meta-data/
EC2 also does API key cycling and other magic here. Has anyone built a web server to duplicate/mock this on development VMs? My Google keywords are failing me here.
Thanks.

Announcing Hologram!
Hologram: a system for bringing EC2 Role-like key management to non-EC2 hosts. Hologram exposes an imitation of the EC2 instance metadata service on developer workstations that supports the temporary credentials workflow.
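If you would rather roll your own than run Hologram, a minimal sketch in Python might look like the following. This is an illustration only: the port, role name, and credential values are made up, and the real service listens on the link-local address 169.254.169.254, which usually takes extra interface configuration to bind on a dev box.

    # Minimal mock of the EC2 metadata credentials endpoint for a dev VM.
    # Port, role name, and credential values below are placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    FAKE_CREDENTIALS = {
        "AccessKeyId": "AKIAFAKEFAKEFAKEFAKE",   # not a real key
        "SecretAccessKey": "fake-secret",
        "Token": "fake-session-token",
        "Expiration": "2030-01-01T00:00:00Z",
    }

    class MetadataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Advertise one fake role, then serve fake credentials for it.
            if self.path == "/latest/meta-data/iam/security-credentials/":
                body = "dev-role"
            elif self.path == "/latest/meta-data/iam/security-credentials/dev-role":
                body = json.dumps(FAKE_CREDENTIALS)
            else:
                self.send_error(404)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8111), MetadataHandler).serve_forever()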

Related

SSL Install on AWS

I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance; however, the client is no longer in contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see a certificate in there. Will installing it there attach it to the website, or do I need to access the server instance and install it directly?
There is a documented process for updating the SSH keys on an EC2 instance. However, it requires some downtime, and must not be run on an instance-store-backed instance. If you're new to AWS you might not be able to determine whether that's the case, so this route would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
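As a rough sketch of that last step, attaching an ACM certificate is a single call with boto3 if you're using an Application Load Balancer; all ARNs below are placeholders for your own resources:

    # Hypothetical sketch: create an HTTPS listener on an existing ALB
    # using an ACM certificate. Replace the placeholder ARNs.
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:region:acct:loadbalancer/app/my-lb/...",
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:region:acct:certificate/..."}],
        DefaultActions=[
            {"Type": "forward",
             "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/..."}
        ],
    )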
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to snapshot the live instance's volume and attach a copy to a different instance to learn how it's configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
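A sketch of that snapshot-and-attach flow with boto3, assuming an EBS-backed instance (the volume ID, instance ID, and availability zone are placeholders):

    # Hypothetical sketch: snapshot the live volume, build a new volume
    # from it, and attach that to a separate inspection instance.
    import boto3

    ec2 = boto3.client("ec2")

    snap = ec2.create_snapshot(VolumeId="vol-0live...", Description="site forensics")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach to the inspection instance, then mount /dev/sdf there.
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId="i-0inspect...", Device="/dev/sdf")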
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.

Cannot login to Neo4j - AWS EC2 Marketplace

So I am starting up a new Neo4j instance in AWS EC2 using Neo4j Community Edition from the AWS Marketplace: https://aws.amazon.com/marketplace/pp/B071P26C9D
The machine comes up and I set up an Elastic IP. However, when I try logging in with the default credentials neo4j/neo4j through the Neo4j Browser web UI, I get an invalid-credentials error.
Anyone know if the creds are different when using the Marketplace AMI?
Well, this is kinda embarrassing, but I just realized that there is a tab labeled Usage Instructions under the EC2 instance. The text under the tab instructs me to use the EC2 instance's instance ID as the default password, and that worked.
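For reference, the instance ID is shown in the EC2 console, and you can also fetch it from the instance itself via the metadata service; a minimal sketch:

    # Fetch this instance's ID (the default Neo4j password per the usage
    # instructions) from the EC2 metadata service. Run on the instance itself.
    import requests

    instance_id = requests.get(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).text
    print(instance_id)  # e.g. i-0abc123def456...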

HashiCorp Vault - Setup / Architecture in Production

I'm getting ready to set up HashiCorp Vault for my web application, and while the examples HashiCorp provides make sense, I'm a little unclear on what the intended production setup should be.
In my case, I have:
a small handful of AWS EC2 instances serving my web application
a couple EC2 instances serving Jenkins for continuous deployment
and I need:
My configuration software (Ansible) and Jenkins to be able to read secrets during deployment
to allow employees in the company to read secrets as needed, and potentially to generate temporary ones for certain types of access.
I'll probably be using S3 as a storage backend for Vault.
The types of questions I have are:
1. Should Vault be running on all my EC2 instances, listening at 127.0.0.1:8200?
2. Or do I create an instance (maybe two, for availability) that just runs Vault, and have other instances / services connect to it as needed for secret access?
3. If I need employees to be able to access secrets from their local machines, how does that work? Do they set up Vault locally against the S3 storage, or should they hit the REST API of the remote servers from step 2 to access their secrets?
4. And to be clear: if any machine that's running Vault is restarted, Vault would need to be unsealed again, which seems to be a manual process involving some number of key holders?
Vault runs in a client-server architecture, so you should have a dedicated cluster of Vault servers (usually 3 is suitable for small-medium installations) running in high-availability (HA) mode.
The Vault servers should bind to the internal private IP, not 127.0.0.1, since on loopback they won't be accessible from elsewhere within your VPC. You definitely do not want to bind to 0.0.0.0, since that could make Vault publicly accessible if your instance has a public IP.
You'll want to bind to the address that is advertised on the certificate, whether that's an IP or a DNS name. You should only communicate with Vault over TLS in a production-grade infrastructure.
Any and all requests go through those Vault servers. If other users need to communicate with Vault, they should connect to the VPC via a VPN or bastion host and issue requests against it.
When a machine that is running Vault is restarted, Vault does need to be unsealed. This is why you should run Vault in HA mode, so another server can accept requests. You can set up monitoring and alerting to notify you when a server needs to be unsealed (Vault returns a special status code).
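For example, a minimal polling probe against Vault's health endpoint (the server address is a placeholder) could look like this; /v1/sys/health returns 200 for an active unsealed node, 429 for an unsealed standby, and 503 when sealed:

    # Minimal sketch of a seal-status probe; the Vault address is a placeholder.
    import requests

    resp = requests.get("https://vault.internal.example.com:8200/v1/sys/health", timeout=2)
    if resp.status_code == 503:
        print("Vault is sealed -- alert the key holders")
    elif resp.status_code == 429:
        print("Unsealed standby node")
    elif resp.status_code == 200:
        print("Active and unsealed")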
You can also read the production hardening guide for more tips.
Specifically for points 3 and 4:
They would talk to the Vault API, which is running on one or more machines in your cluster (if you do run it in HA mode with multiple machines, only one node will be active at any time). You would provision them with some kind of authentication, using one of the available authentication backends, such as LDAP.
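As a sketch of what that looks like for an employee, using the hvac Python client with an LDAP backend (the server address, username, and secret path are placeholders):

    # Hypothetical sketch: an employee authenticates via LDAP and reads a secret.
    import getpass
    import hvac

    client = hvac.Client(url="https://vault.internal.example.com:8200")
    client.auth.ldap.login(username="alice", password=getpass.getpass())

    # Read a secret from a KV v2 engine mounted at the default "secret/" path.
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
    print(secret["data"]["data"])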
Yes. By default, and in its recommended configuration, if any of the Vault nodes in your cluster gets restarted or stopped, you will need to unseal it with however many key shards are required, depending on how many shards (and what threshold) were generated when you stood up Vault.
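A sketch of that unseal flow, again with hvac (each key holder would run something like this, or the equivalent vault operator unseal CLI command, and enter their own shard):

    # Hypothetical sketch: submit unseal key shards until the threshold is met.
    import hvac

    client = hvac.Client(url="https://vault.internal.example.com:8200")
    while client.sys.read_seal_status()["sealed"]:
        shard = input("Enter an unseal key shard: ")
        client.sys.submit_unseal_key(key=shard)
    print("Vault is unsealed")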

Setting up an Amazon Server with Go Daddy

I am trying to set up an Amazon server to host a dynamic website I'm currently creating. I bought the domain on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial: http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now loads properly on the domain.
I'm not sure of the next step from here, but I would like to set up:
- A database server
- Anything else that might be necessary to run a dynamic website.
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk, and your database can be hosted on RDS (Relational Database Service). The two can then be integrated.
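For example, the Python platform on Elastic Beanstalk looks by default for a module named application.py exposing a WSGI callable named application, so a minimal Flask app would be:

    # application.py -- minimal Flask app in the layout Elastic Beanstalk expects.
    from flask import Flask

    application = Flask(__name__)

    @application.route("/")
    def index():
        return "Hello from Elastic Beanstalk"

    if __name__ == "__main__":
        application.run()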
Otherwise, spin up your own t2.micro instance in EC2, log in via SSH, and set up the database server and application like you have locally. This server could also host the (currently S3-hosted) static files.
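If you go the EC2 route, launching the instance itself can be scripted too; a rough boto3 sketch (the AMI ID and key pair name are placeholders):

    # Hypothetical sketch: launch a single t2.micro instance.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-placeholder",   # pick a current Linux AMI for your region
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",       # must already exist in your account
    )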
I don't know what your requirements are, but personally I would start by setting up the EC2 instance and go from there, as integrating AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.

Amazon ElastiCache vs Ramfs in Linux

I am new to Amazon Web Services. I was reading about Amazon ElastiCache and wanted to clarify whether it is like (or maybe more than) a RAM filesystem in Linux, where we use a portion of system memory as a file system. The AWS documentation says ElastiCache is a web service. Is it like an EC2 instance with a few memory modules attached? I really want to understand how exactly it works.
Our company has decided to migrate our physical servers into the AWS cloud. We use the Apache web server and a MySQL database running on Linux. We provide a SaaS platform for e-mail marketing and event scheduling for our customers. There is usually high web traffic to our website from 9am to 5pm on weekdays. I would like to understand how ElastiCache would be configured in AWS if we decide to use it. We have planned two EC2 instances for our web server and an RDS instance for the database.
Thanks.
ElastiCache is simply managed Redis or Memcached. Depending on which one you choose, you would use that cache's client library in your application.
How you would implement it depends on what kind of caching you are trying to accomplish.
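For example, if you chose the Redis flavor, a minimal read-through cache in front of your MySQL lookups might look like this with the redis-py client (the endpoint hostname and the DB helper are placeholders):

    # Hypothetical sketch: read-through caching against an ElastiCache Redis
    # endpoint. The hostname is a placeholder for your cluster's endpoint.
    import redis

    cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)

    def load_profile_from_mysql(user_id):
        # Placeholder for the real RDS/MySQL query.
        return f"profile-for-{user_id}"

    def get_user_profile(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached.decode()          # cache hit
        profile = load_profile_from_mysql(user_id)
        cache.setex(key, 300, profile)      # cache for 5 minutes
        return profile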