Deploy AWS Amplify Web App to EC2 (Not Lambda)

I recently realised that my Next.js project deployed on AWS Amplify uses Lambda, but I need it to run on EC2 instead. Is this possible at all?
I'm new to this whole thing, so excuse the ignorance, but for certain reasons I need to use EC2.
Thanks

AWS EC2 is a service that provides all the compute, storage, and networking needs you may have for any application you want to develop. From its site:
Amazon EC2 offers the broadest and deepest compute platform with a choice of processor, storage, networking, operating system, and purchase model.
Source
Basically, you can create any number of virtual machines, connect them to each other and to the Internet however you like, and use any data persistence strategy.
There are many things to unpack when using EC2, but to start, I would suggest that you learn how to set up an EC2 instance using the default VPC that comes with your account. Be sure to configure the instance to have a public IP so you can access it through the Internet. Once inside, you can deploy your application however you like and access it through your public IP.
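As a rough sketch of that first step using boto3 (the Python AWS SDK): every ID below (AMI, key pair, subnet, security group) is a placeholder you would swap for your own values, and the region is arbitrary.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one instance in a default-VPC subnet with a public IP.
    # All IDs below are placeholders for your own resources.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # e.g. an Amazon Linux AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",             # SSH key pair created beforehand
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "AssociatePublicIpAddress": True,    # reachable from the Internet
            "Groups": ["sg-0123456789abcdef0"],  # must allow inbound SSH/HTTP
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Wait until it is running, then print the public IP to connect to.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    print(desc["Reservations"][0]["Instances"][0]["PublicIpAddress"])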
Before moving on, try to decide why you need your app to run on EC2. Lambda is a FaaS (Function as a Service) product, meaning that all of the infrastructure is managed by the service provider. On the other hand, EC2 is an IaaS (Infrastructure as a Service) product, which means that you have to handle most of the infrastructure yourself.

Related

How does Amazon configure and manage the EC2 instances for RDS and similar services?

Many different AWS services use EC2 instances and you can understand that from the pricing pages.
Basically it's a multi-instance architecture (and not the more familiar multi-tenant approach that I personally use for most web applications).
When an AWS customer creates a new resource, internally AWS has to spin up a new EC2 instance, configure it, monitor its status and apply security patches and updates.
Does anyone know how they connect to the VM to configure it?
Do they use SSH to connect, or another protocol?
Or do they use some kind of agent, installed on the VM at first boot, to apply the updates and changes?
Note: this question isn't about the details of managing a database; I just want to know how AWS applies and updates the configuration of the EC2 instances when it offers a "managed" service (any service).

How to create a table automatically in AWS Aurora Serverless with the Serverless Framework

I'm trying to create tables automatically with npm migrate whenever we deploy changes with the Serverless Framework. It worked fine when I used a regular Aurora database, but since I moved to Aurora Serverless RDS (Sydney region) it's not working at all, because Aurora Serverless RDS runs inside a VPC, so any Lambda function that needs to access it must be in the same VPC.
PS: we're using GitHub Actions as the pipeline to deploy everything to Lambda.
Please let me know how to solve this issue, thanks.
There are only two basic ways that you can approach this: open a tunnel into the VPC or run your updates inside the VPC. Here are some of the approaches to each that I've used in the past:
Tunnel into the VPC:
VPN, such as OpenVPN.
Relatively easy to set up, but designed to connect two networks together and represents an always-on charge for the server. Would work well if you're running the migrations from, say, your corporate network, but not something that you want to try to configure for GitHub Actions (or any third-party build tool).
Bastion host
This is an EC2 instance that runs in a public subnet and exposes SSH to the world. You make an SSH connection to the bastion and then tunnel whatever protocol you want underneath it. It is typically run as an "always on" instance, but you can start and stop it programmatically.
I think this would add a lot of complexity to your build. Assuming that you just want to run on demand, you'd need a script that starts the instance and waits for it to be ready to accept connections. You would probably also want to adjust the security group's ingress rules to allow traffic only from your build machine (whose IP is likely to change for each build). Then you'd have to open the tunnel by running ssh in the background, and close it again after the build is done.
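For illustration, a minimal sketch of such a script using boto3 plus the system ssh client; the instance ID, key file, database endpoint, and migration command are all placeholders for your own values:

    import subprocess
    import boto3

    ec2 = boto3.client("ec2")
    BASTION_ID = "i-0123456789abcdef0"   # placeholder bastion instance ID

    # Start the bastion and wait until it passes its status checks.
    ec2.start_instances(InstanceIds=[BASTION_ID])
    ec2.get_waiter("instance_status_ok").wait(InstanceIds=[BASTION_ID])

    desc = ec2.describe_instances(InstanceIds=[BASTION_ID])
    bastion_ip = desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]

    # Open the tunnel in the background: local port 3306 forwards through
    # the bastion to the database endpoint inside the VPC, so the
    # migrations run against 127.0.0.1:3306.
    tunnel = subprocess.Popen([
        "ssh", "-i", "bastion-key.pem", "-N",
        "-o", "StrictHostKeyChecking=accept-new",
        "-L", "3306:mydb.cluster-xyz.ap-southeast-2.rds.amazonaws.com:3306",
        f"ec2-user@{bastion_ip}",
    ])
    try:
        # Placeholder for however you invoke your migrations.
        subprocess.run(["npm", "run", "migrate"], check=True)
    finally:
        tunnel.terminate()                            # close the tunnel
        ec2.stop_instances(InstanceIds=[BASTION_ID])  # stop paying for it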
Running the migration inside the VPC:
The simplest approach (IMO) is to just move your build inside the VPC using CodeBuild. If you do this you'll need a NAT so that the build can talk to the outside world. It's also not as easy as it should be to configure CodeBuild to talk to GitHub (there's one manual step where you need to provide an access token).
If you're doing a containerized deployment with ECS, then I recommend packaging your migrations in a container and deploying it onto the same cluster that runs the application. Then you'd trigger the run with aws ecs run-task, as shown in the sketch after this list (I assume there's something similar for EKS, but I haven't used it).
If you aren't already working with ECS/EKS, then you can implement the same idea with AWS Batch.
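To make the run-task idea from the ECS bullet concrete, here is a minimal boto3 sketch; the cluster name, task definition, subnets, and security group are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    # Run the migration image once as a one-off Fargate task, inside the
    # same VPC as the database. All names and IDs are placeholders.
    response = ecs.run_task(
        cluster="my-app-cluster",
        taskDefinition="db-migrations",  # task whose container runs the migration
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # same VPC as Aurora
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )
    task_arn = response["tasks"][0]["taskArn"]

    # Block until the task stops, then check how the container exited.
    ecs.get_waiter("tasks_stopped").wait(cluster="my-app-cluster",
                                         tasks=[task_arn])
    desc = ecs.describe_tasks(cluster="my-app-cluster", tasks=[task_arn])
    print("migration exit code:",
          desc["tasks"][0]["containers"][0].get("exitCode"))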
Here is an example of how you could approach database schema migration using Amazon API Gateway, AWS Lambda, Amazon Aurora Serverless (MySQL) and the Python CDK.
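The linked example is CDK-based, but the Lambda side of such a migration can be sketched roughly as follows, assuming the cluster has the RDS Data API enabled (the ARNs, database, and table are placeholders). The Data API is attractive here precisely because it works over HTTPS, so the function does not have to sit inside the VPC:

    import os
    import boto3

    # The RDS Data API lets Lambda run SQL against Aurora Serverless over
    # HTTPS, with no VPC attachment needed on the function.
    rds_data = boto3.client("rds-data")

    def handler(event, context):
        # Cluster and secret ARNs are placeholders passed in via environment.
        rds_data.execute_statement(
            resourceArn=os.environ["CLUSTER_ARN"],
            secretArn=os.environ["SECRET_ARN"],
            database="mydb",
            sql="""
                CREATE TABLE IF NOT EXISTS messages (
                    id INT AUTO_INCREMENT PRIMARY KEY,
                    body TEXT NOT NULL
                )
            """,
        )
        return {"statusCode": 200, "body": "migration applied"}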

One website connected to multiple EC2 instances?

I am new to AWS. I already have a GoDaddy VPS, but my application was very slow when hosted there.
So I migrated to AWS, and now my application works very fast, but sometimes the EC2 instance fails and automatically restarts after a while. Since my application is basically an on-demand service app, these instance failures cause me to lose some conversations. So I heard about Amazon's load balancing service, which automatically turns the traffic over to another instance if one fails.
I used an Ubuntu 16.04 instance with VestaCP to host my application on AWS EC2. Is it possible to share the storage of my current (master) EC2 instance with a new (alternative) instance, so that the same data and database are used by both EC2 instances?
My question might look funny, but I need to know whether it's possible or not. If it is, are there any tutorials? If it's not, what kind of services do I need, along with the AWS load balancer, to handle high traffic and instance failure?
Thanks
If you are migrating from a more conventional hosting to a cloud provider but you don't adopt a cloud architecture, you are missing out many of the benefits of the cloud.
In general, for a highly available, highly scalable web application, having shared data locally is an anti-pattern.
A modern web application separates state (storage) from processing. Ideally your instance would hold only configuration and temporary data. For the database, assuming you are using a relational database, you would start an RDS instance. For the files, if they are mainly things like images and static content, you would probably use the Simple Storage Service (S3).
Your EC2 instance would connect to the RDS database and S3. Since the data is not local to the instance anymore, you can easily have multiple instances all using the same storage.
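As a rough sketch of what that separation looks like in application code (the bucket name, database endpoint, and credentials are placeholders, and pymysql is just one possible client):

    import boto3
    import pymysql  # assumes a MySQL-flavoured RDS database

    s3 = boto3.client("s3")

    def save_photo(user_id, filename, data):
        # The file goes to S3 rather than the instance's local disk, so any
        # instance behind the load balancer can serve it later.
        key = f"uploads/{user_id}/{filename}"
        s3.put_object(Bucket="my-app-uploads", Key=key, Body=data)

        # The metadata goes to the shared RDS database; endpoint and
        # credentials are placeholders.
        conn = pymysql.connect(
            host="myapp.cluster-xyz.us-east-1.rds.amazonaws.com",
            user="app", password="...", database="myapp",
        )
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO photos (user_id, s3_key) VALUES (%s, %s)",
                    (user_id, key),
                )
            conn.commit()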
Your EC2 instances could be configured with auto scaling, so AWS would automatically add or remove instances in response to the real traffic you are seeing.
If you have complex storage needs and S3 is not enough for the file layer (and for most applications S3 should suffice), you can take a look at the Elastic File System.
Yes, it is achievable through AWS ELB. You mentioned a separate requirement for an EC2 instance, but there is no need for that, as AWS ELB manages all of this for you.
Note: always keep your database on a separate service like AWS RDS, which provides data backup and rollback, so that if one instance fails the other instances still have access to the database. Likewise, files should be stored on AWS S3; only then can you achieve load balancing.
For more information: link

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, which includes a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I could see achieving this are:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable: can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial idea of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions within Azure, not different cloud providers, but the sample can still be useful for seeing how some of the things are configured.)
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not well known is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside on the SF roadmap for a long time because they are very complex to do, which is why Microsoft avoids recommending them; but they are possible.
If you want to go the AWS route, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.

How to know if I need to use AWS ElastiCache and AWS Elastic Load Balancing?

I'm a little bit confused about caching.
So let's say I want to build a simple chat website with user login/registration and photo uploading.
I plan to save uploaded files in Amazon S3,
and user data in DynamoDB.
So which type of data should I put in ElastiCache to improve my website's performance?
Another question: if I use Elastic Beanstalk, do I need to use Elastic Load Balancing along with it?
I read that Elastic Beanstalk is an automated version of EC2, so there's no need to take care of manual processes; does that include ELB?
Thanks for helping
I think you are confused about terms. The technologies you are talking about are totally different things.
ElastiCache is a managed Memcached/Redis solution. You use it for caching purposes, not for persistent data.
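In your case, a natural fit would be caching hot reads (for example, user profiles from DynamoDB) with a cache-aside pattern. A minimal sketch, where the Redis endpoint and the table name are placeholders:

    import json
    import boto3
    import redis

    # Placeholder ElastiCache Redis endpoint and DynamoDB table name.
    cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com",
                        port=6379, decode_responses=True)
    users = boto3.resource("dynamodb").Table("users")

    def get_user_profile(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)          # cache hit: skip DynamoDB

        # Cache miss: read from DynamoDB and cache the result for 5 minutes.
        profile = users.get_item(Key={"user_id": user_id}).get("Item", {})
        cache.setex(key, 300, json.dumps(profile, default=str))
        return profile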
Elastic Load Balancing is a managed load balancing solution, like HAProxy. If you want to leverage the high availability and scalability features of AWS, you need it. You create an Auto Scaling group, which spawns or kills EC2 instances whenever needed (according to rules you set up), and the Auto Scaling group attaches or detaches EC2 instances from the ELB. If you install your application on only one instance you don't have to worry about any of this, but if you're doing that, you're doing it wrong; that is not how AWS is meant to be used. Just go and use a cheaper and simpler VPS provider instead.
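As a rough sketch of that wiring with boto3 (the group name, launch template, subnets, and target group ARN are all placeholders created beforehand):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Create a group that keeps 2-10 instances registered with the load
    # balancer's target group. All names and ARNs are placeholders.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-template",
                        "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/web/abc123"
        ],
    )

    # Rule: track average CPU at 60%, adding or removing instances as needed.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="target-cpu-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )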
Elastic Beanstalk is a wrapper service. It is intended to abstract away the complexity of all this EC2 stuff. It is like Heroku or Google App Engine: you give it your application file (or Docker image) and it installs everything for you.
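Under the hood, "installing everything for you" amounts to something like the following calls; the application name, bucket, and solution stack are placeholders (stack names change over time, so check the current list with list_available_solution_stacks):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Register the app and a version of it from a zip previously uploaded
    # to S3. All names are placeholders.
    eb.create_application(ApplicationName="my-chat-app")
    eb.create_application_version(
        ApplicationName="my-chat-app",
        VersionLabel="v1",
        SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v1.zip"},
    )

    # Creating the environment is what provisions the EC2 instances, ELB,
    # Auto Scaling group, security groups, etc. on your behalf.
    eb.create_environment(
        ApplicationName="my-chat-app",
        EnvironmentName="my-chat-app-prod",
        VersionLabel="v1",
        SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",
    )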
If you are new to AWS, I'd recommend starting with Elastic Beanstalk: understand how it works under the hood and which types of resources it creates for you. Once you learn the basics, you can create your own stack and customize it more. But Elastic Beanstalk is also a production-ready product; you can trust it.