Host C# web API in AWS? - amazon-web-services

I am just finishing up a C# web API and am wondering about the best way to deploy it on AWS. I am aware I could boot up a VM in the cloud and host it in IIS, but is this the only way?

Spinning up one or more EC2 instances is one way, and there is nothing wrong with that approach.
Two other options:
Employ Elastic Beanstalk to automate the deployment for you - it still ends up running on EC2 instances, but the provisioning and deployment are handled for you.
Convert your Web API to API Gateway plus C# Lambda functions instead and run it 'serverless'. Depending on the requirements of your application, this may or may not be a viable option for you.
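For the third option, a minimal, hypothetical sketch of what a function behind an API Gateway proxy integration looks like, shown in Java here (for a C# Web API specifically, the Amazon.Lambda.AspNetCoreServer package plays the equivalent role and can host the whole ASP.NET Core app inside one Lambda function):
// Minimal, hypothetical Lambda handler behind an API Gateway proxy integration.
// Requires the aws-lambda-java-core and aws-lambda-java-events libraries.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class HelloHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        // API Gateway turns the incoming HTTP request into the event object
        // and maps this return value back onto an HTTP response.
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"message\":\"hello from Lambda\"}");
    }
}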

Related

AWS - What are the exact differences between EC2, Beanstalk and LightSail?

What are the exact differences between EC2, Beanstalk and LightSail in AWS?
What are good real time scenarios in which I should use these services?
They are all based on EC2, the compute service from AWS allowing you to create EC2 instances (virtual machines in the cloud).
Lightsail is packaged in a similar way to a Virtual Private Server (VPS), making it easy for anyone to start with their own server. It has a simplified management console, and many options are tuned with default values that maximize availability and security.
Elastic Beanstalk is a service for application developers that provisions EC2 instances and a load balancer automatically. It creates the EC2 instances, installs an execution environment on them, and deploys your application for you (Elastic Beanstalk supports Java, Node, Python, Docker and many others).
Behind the scenes, Elastic Beanstalk creates regular EC2 instances that you will see in your AWS Console.
And EC2 is the bare service that makes the others possible. If you choose to create an EC2 instance, you have to choose your operating system, manage your SSH key, install your application runtime and configure security settings by yourself. You have full control of that virtual machine.
In simple terms:
EC2 - a virtual host or image, which you can use to install apps and have a machine to do whatever you like with.
Lightsail - similar, but a more user-friendly management option; good for small applications.
Beanstalk - an orchestration tool that does all the work to create an EC2 instance, install your application and software, and frees you from the manual tasks of creating an environment.
More details at - https://stackshare.io/stackups/amazon-ec2-vs-amazon-lightsail-vs-aws-elastic-beanstalk
I don't know if my scenario is typical in any way, but here are the differences that were critical for me. I'm happier with EC2 than EB:
EC2:
just a remote Linux machine with shell (command line) access
traceable application-level errors; easy to see what is wrong with your application
you can manage it with the AWS web console or the AWS command line tool
you will need to repeat steps if you want to reproduce the same environment
some effort to get proper shell access (e.g. restricting the security rule to your IP only)
no load balancer provided by default
Elastic Beanstalk
a service that creates an EC2 instance with a programming language of your choice (e.g. Python, PHP, etc.)
runs one application on that machine (for Python - application.py)
you upload applications as a .zip file; extra effort is needed to deploy from your Git source
you need to get used to the environment-vs-application mental model
application-level errors are hidden deep in the server logs, and logs are downloaded from a separate menu
can be managed from the web console, but also needs another CLI tool in addition to the AWS CLI (you end up installing two CLI tools)
provides a load balancer and other server-level services, taking away the manual setup
great for scaling stable applications, not so much for trial-and-see experimentation
probably more expensive than just an EC2 instance
Amazon EC2 is a virtual host; in other words, it is a server where you can SSH in, configure your application, install dependencies and so on, just like on your local machine. EC2 has dozens of AMIs (Amazon Machine Images - essentially the operating system of your EC2 server; for instance, you can have EC2 running a Linux-based OS or Windows). To summarize, it is a great option if you need a machine in your hands.
Amazon Lightsail is a simple tool with which you can deploy and manage an application with minimal server management. You will find it very practical if your application is small; for instance, it will fit perfectly if you use WordPress or another CMS.
AWS Elastic Beanstalk is an orchestration tool. You can manage your application within that service; it is a more elevated option than AWS Lightsail.
If you still do not understand the differences, you can take a look at each service overview.
There is also an answer on Quora.
I have spent only 10 mins on these technologies but here is my first take.
EC2 - a bare-bones service. It gives you a server with an OS. That is it. Nothing else is installed on it, so if you need a web server (nginx) or Python, you'll need to install it yourself.
Beanstalk - helps you deploy your applications. Say you have a Python/Flask application which you want to run on a server. Traditionally you'd have to build the app, move the deployable package to another machine with a web server installed, then move the package into some directory in the web server. Beanstalk does all this for you automatically.
LightSail - I haven't tried it, but it seems to be an even simpler option to create a server with a pre-installed OS/software.
In summary, these services make application deployment easier by pre-configuring the server/EC2 instances with the required software packages and security policies (e.g. port numbers, etc.).
I am not an expert so I could be wrong.

Communication Between Microservices in AWS ECS

I'm having trouble with the communication between microservices. I have many Spring Boot applications, and many HTTP and AMQP (RabbitMQ) requests between them. Locally (in dev) I use Eureka (Netflix OSS) without Docker images.
The question is: how can I get the same behavior on the Amazon ECS infrastructure? What is the common practice for communication between microservices using Docker? Can I still use Eureka for service discovery? Besides that, how will this communication work between container instances?
I'd suggest reading up on ECS service load balancing, in particular two points:
Your ECS service configuration may say that you let ECS or the EC2 instance essentially pick what port number the service runs on externally (i.e. for Spring Boot applications, inside the Docker container your application thinks it's running on port 8080, but to anything outside the Docker container it may be running on port 1234) - see the sketch after this list.
ECS will check the health endpoint you defined in the load balancer, and kill/respawn instances of your service that have died.
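The dynamic-port behaviour in the first point usually comes from registering the container with a host port of 0, which tells ECS/Docker to pick a free ephemeral port on the instance (the ALB target group then tracks whatever port was chosen). A rough sketch with the AWS SDK for Java - the family name, image and memory value are placeholders:
// Registers a task definition whose container port 8080 is mapped to host port 0,
// i.e. "let ECS/Docker pick a free ephemeral port on the EC2 instance".
// Requires the aws-java-sdk-ecs dependency and AWS credentials in the environment.
import com.amazonaws.services.ecs.AmazonECS;
import com.amazonaws.services.ecs.AmazonECSClientBuilder;
import com.amazonaws.services.ecs.model.ContainerDefinition;
import com.amazonaws.services.ecs.model.PortMapping;
import com.amazonaws.services.ecs.model.RegisterTaskDefinitionRequest;

public class RegisterTaskDef {
    public static void main(String[] args) {
        AmazonECS ecs = AmazonECSClientBuilder.defaultClient();

        ContainerDefinition container = new ContainerDefinition()
                .withName("b-service")
                .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/b-service:latest")
                .withMemory(512)
                .withPortMappings(new PortMapping()
                        .withContainerPort(8080)  // what Spring Boot listens on inside the container
                        .withHostPort(0));        // 0 = dynamic host port on the instance

        ecs.registerTaskDefinition(new RegisterTaskDefinitionRequest()
                .withFamily("b-service")
                .withContainerDefinitions(container));
    }
}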
A load balancer gives you different ways to specify which application in the cluster you are talking to. This can be path based or host-name based (and maybe some others). Thus http://myservice.example.com/api could point to a different ECS service than http://myservice.example.com/app... or http://app.myservice.example.com vs http://api.myservice.example.com.
You can configure ECS without a load balancer, I'm unsure how well that will work in this situation.
Now, you're talking about service discovery. You can still use Eureka for service discovery, having Spring Boot take care of that. You may need to be clever about how you tell Eureka where your service lives (the hostname inside the Docker container may be useless, and the port number inside the container almost certainly will be). You may need to do something clever here to correctly derive those values, like introspecting using the AWS APIs. I think this SO answer describes it correctly, or at least closely enough to get started.
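One way the 'being clever' part can look on the Spring Boot side is overriding the Eureka instance config so the service registers with the host's IP and the externally mapped port rather than the values seen inside the container. This is only a sketch; the HOST_IP and HOST_PORT environment variables are hypothetical and would have to be populated by an entrypoint script or by introspecting the EC2/ECS metadata endpoints:
// Hypothetical Eureka registration override for a Dockerised service on ECS.
// HOST_IP and HOST_PORT are assumed to be injected by an entrypoint script or
// derived from the EC2/ECS metadata endpoints - ECS does not set them itself.
import org.springframework.cloud.commons.util.InetUtils;
import org.springframework.cloud.netflix.eureka.EurekaInstanceConfigBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EurekaDockerConfig {

    @Bean
    public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) {
        EurekaInstanceConfigBean config = new EurekaInstanceConfigBean(inetUtils);
        config.setPreferIpAddress(true);
        // Register the address/port that are reachable from outside the container,
        // not the container-internal hostname and port 8080.
        config.setIpAddress(System.getenv("HOST_IP"));
        config.setNonSecurePort(Integer.parseInt(System.getenv("HOST_PORT")));
        return config;
    }
}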
Additionally, ECS apparently has service discovery built in now. This is either new since I last used ECS, or we didn't use it because we had other solutions. It may be worth a look if you aren't completely tied to Eureka for other reasons.
Thanks for your reply. For now I'm using Eureka because I'm also using Feign for communication between microservices.
My case is this: I have microservices (for example A, B, C). A communicates with B and C by way of Feign (REST).
Microservices Example
Example of Code on Microservice A:
#FeignClient("b-service")
public interface BFeign {
}
#FeignClient("c-service")
public interface CFeign {
}
Using ECS and an ALB, is it still possible to use Feign? Either way, how would you suggest I do this?
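Not a definitive answer, but one common pattern when Eureka is replaced by an ALB is to point each Feign client at a fixed URL - the ALB's DNS name, or a host/path that one of its routing rules forwards to the right ECS service - instead of a Eureka service ID. A sketch, where the b.service.url property and the hostname are placeholders you would define yourself:
// Feign client resolved through an ALB rule instead of Eureka.
// "b.service.url" is a placeholder property, e.g.
//   b.service.url=http://b-service.example.com  (a host- or path-based ALB rule)
// The @FeignClient package depends on your Spring Cloud version
// (org.springframework.cloud.openfeign in recent releases).
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

@FeignClient(name = "b-service", url = "${b.service.url}")
public interface BFeign {

    // Hypothetical endpoint exposed by microservice B.
    @GetMapping("/api/items")
    String findItems();
}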

Sails app with multiple instances on AWS - Redis/ElastiCache/ALB

I'm building a Sails app that is using socket.io and see that Sails offers a method for using multiple servers via redis:
http://sailsjs.org/documentation/concepts/realtime/multi-server-environments
Since I will be placing the app on AWS, preferably behind an ELB (Elastic Load Balancer) with an Auto Scaling group of multiple EC2 instances, I was wondering how I can handle this so it doesn't need a separate Redis instance?
Maybe we can use AWS ElastiCache? If so - how would this be done?
Now that AWS has released the new ALB (Application Load Balancer), which supports WebSockets, could this be used to help simplify things?
Thanks in advance
Update - use-cases in the application:
Allow end-users to update data dynamically from their own dashboard and display analytics/stats in real time to an administrator.
Application statuses change based on specific timings, e.g. at a given start date/time the app allows users to update data.
Regarding your first question, you don't want to run Redis on the same servers that Sails is running on, especially if you are using AutoScaling. The Redis server needs to be a separate server that won't disappear if your environment experiences a "scale-in" event. So Redis is going to have to be on a separate "server" somewhere.
ElastiCache is just separate EC2 instances, running Redis, where AWS handles most of the management for you to the point that you can't even SSH into the instance. It's similar to how RDS works. ElastiCache will certainly work for your scenario. You might also want to look at the third-party service RedisLabs which also manages Redis instances on AWS for you.
Regarding your second question, an Application Load Balancer will have no bearing on your Redis usage. It will however bring actual support for WebSockets which it sounds like you are using. So yes, you should be using an ALB instead of an ELB.

Hosting web services project in Amazon

Hi, we have built a Java-based web services project using the JBoss server. How do I host this application on the Amazon cloud? These web services act as the back end for an Android mobile app.
I am looking for a PaaS option with a JBoss server and a Postgres database. I could create a Postgres database, but could not find a JBoss server.
My understanding is that with PaaS, JBoss and Postgres should be able to scale up by themselves on demand.
Another option provided by Amazon is EC2, as far as I have understood. But if I go with EC2, I will have to install and set up JBoss and Postgres on my own. Will it then scale up by itself on demand?
Please guide.
If you want to deploy your web application to AWS and ensure its scalability, you have basically two options:
EC2 instance [IaaS] - The disadvantage is, as you mentioned in your question, that you have to configure everything manually. Some external mechanism for scaling has to be used. Amazon provides its AutoScaling service which can be configured to launch new EC2 instances based on utilization or some other metric.
Elastic Beanstalk [PaaS] - This service has auto-scaling already built in and manages the EC2 instances running your application on its own (it takes care of launching them, deploying the app, etc.). The disadvantage is that the JBoss server is not supported at the moment (you would have to switch to Tomcat).
There is a way to make JBoss work on Elastic Beanstalk, however. Elastic Beanstalk has recently added support for Docker, so if you package your JBoss API as a Docker image, you can deploy it to Elastic Beanstalk and scale it without much effort or configuration.
As for the database mentioned in your question, Amazon has plenty of choices, Postgres included, in its RDS service.
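Once the RDS Postgres instance exists, the Java side connects to it like any other Postgres server. A minimal sketch - the endpoint, database name and credentials below are placeholders, use the values shown on your RDS console:
// Minimal JDBC connection check against an RDS Postgres instance.
// Requires the org.postgresql:postgresql driver on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RdsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint/credentials - copy the real ones from the RDS console.
        String url = "jdbc:postgresql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            if (rs.next()) {
                System.out.println("Connected to: " + rs.getString(1));
            }
        }
    }
}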

Deploy and manage WebApp with AWS services

I'm a noob with AWS services. I develop web applications with Ruby on Rails, so I'd like to know the best (or the right) way to deploy and manage a web application with AWS.
Right now there are a bunch of AWS services for handling web apps, but I'm not sure which service to use: OpsWorks, EC2 (setting up the entire server), Elastic Beanstalk, EC2 Container Service and so on…
Well, I have 3 small apps from different clients and I'm looking for the right way to have them on one instance or a couple of instances. Right now I'm on OpsWorks: I have 3 stacks, one for each web app. I want to know if I can deploy and manage those apps in one stack with 2 OpsWorks instances, or whether there is a better way or other services such as IaaS or PaaS solutions. So I'm looking for advice or orientation on using AWS services for this kind of thing.
This question is rather vague and the answer depends on the needs of your app, but I'll give my 2 cents regardless. I have several rails apps hosted on EC2 instances running Ubuntu, NGINX, and Phusion Passenger. The apps that receive a decent amount of traffic and require consistent performance/availability are cloned across multiple EC2 instances (in multiple zones) and have traffic managed by Elastic Load Balancers (ELBs). The app databases are served through amazon's RDS services. Domain registration and nameservers are set up through AWS Route 53. Static assets are served from AWS S3.
This type of architecture certainly has a price tag on it and isn't the only way to do it. My experience has been that all of my older Rails apps have survived over a year with 100% uptime and rarely have moments of slowness been the fault of AWS as opposed to my own code or 3rd-party software.
Hope this helps; feel free to ask questions.