Should I use Serverless Computing [closed] - amazon-web-services

Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
I am creating an MVP (Minimum Viable Product) that has a Node.js server using Express for a REST API and a Socket.IO connection for chat features.
My concern is not so much cost or scalability as setup time and maintenance, since this is an MVP. Would a serverless or a non-serverless setup take less time to set up and maintain on AWS?

Serverless is a great choice if you want to set up a simple REST API application, and using Express alongside it is also a good choice.
API Gateway and the Serverless Framework now support WebSockets, so it should be pretty easy to create a WebSocket application. When it comes to Socket.IO, however, you will need to do a bit of research before diving in.
WebSocket support on API Gateway is relatively new, and there aren't many resources online about it; the combination with Lambda can be a little difficult to grasp at first. For Socket.IO specifically there are even fewer.
For your MVP, I personally recommend an EC2 instance running Socket.IO. I think it would be easier.

There are several reasons to choose a serverless infrastructure over a non-serverless one. In many cases these align closely with the pillars of the AWS Well-Architected Framework. Serverless architectures offer great:
Reliability - no need to guess capacity, can easily scale horizontally with demand
Efficiency - tremendously reduced costs for intermittent and infrequent workloads
Maintenance - nonexistent
Availability - highly available and fault tolerant
While your proposed project does appear to fit well within the FaaS model (an infrequent and unpredictable workload with low resource requirements), the disadvantages of serverless, notably the more complex, harder-to-test architecture and vendor lock-in, can make it challenging to rapidly prototype and deploy an MVP.
Since your product favors an engineering tradeoff toward time to market, a non-serverless approach will most likely let you release an MVP quickly with minimal headache.

There are several serverless frameworks that work with AWS Lambda. From my real-world experience, here are some notes on each of them:
AWS Amplify (https://docs.amplify.aws) is a full-stack serverless solution for developers. It's quite easy to use at the beginning. The con is that over time your maintenance cost grows on the deployment side: it's very slow to deploy a stack on AWS when you just need to change a small piece of code, because it downloads all the stack files locally and then uploads them again.
The Serverless Framework (https://www.serverless.com) is less complex, has rich plugins, and is oriented toward small functions. The downside is that, by default, the same code package is shared across Lambdas, so as the project grows the package size grows, and your Lambdas' cold starts get slower.
The Simplify framework (https://github.com/simplify-framework/codegen) is lightweight but rich in functionality. It lets you build your CI/CD inside your project, a concept similar to the AWS CDK, which was released just a few days earlier in May. You design your API using OpenAPI (Swagger) specs, which can be reused to standardize your APIs across tools and processes. Avoiding vendor lock-in has been designed in carefully: today you are on AWS, but tomorrow you could be on your own on-premise servers.
Choose what fits your solution; there is no one-size-fits-all.

Related

How do I handle development and production environments in AWS? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 2 years ago.
Building an app to be launched in production - and unsure how to handle the dev/production environments on AWS.
If I use multiple buckets, multiple DynamoDB tables, multiple Lambda functions, multiple Elastic Search instances, EC2, API gateway - it seems SUPER cumbersome to have a production and a dev environment?
Currently there is only ONE environment, and once the app goes live - any changes will be changing the production environment.
So how do you handle two environments on AWS? The only way I can think of - is to make copies of every lambda function, every database, every EC2 instance, every API and bucket.... But that would cost literally double the price and be super tedious to update once going live.
Any suggestions?
There are a couple of approaches. However, regardless of the option, I have found it best to keep as much infrastructure as code as possible. This gives you the maximum flexibility in terms of environment setup and recoverability.
There's the separate account approach
Create a new account
Move all your objects (EC2, S3, etc.) into this account
This is easily done if you have the majority of your infrastructure as code and are working out of version control such as git, as you can use AWS CloudFormation.
Make sure you rename the S3 buckets to something that is globally unique and compliant.
You are then running two separate instances of everything. However, you can put some cost controls in place, such as smaller EC2 instances. Or you can simply delete the entire CloudFormation stack when you are not using it and spin it up when needed. There is more up-front cost in terms of time with this approach, but it can save money in the long run. Plus, the separation of accounts is great from a security perspective.
One account approach
This can get a bit messy but there are several features which can help you split out one account into dev and production.
Lambda versioning. If you are using Lambdas, you get versioning and aliases, which in effect means you can have one Lambda set up with a production and a dev version under the same function name.
API Gateway has 'stages'. These are effectively environments, and you can label one production and one development to separate the concerns within a single API.
S3 buckets. You can always make a key at the top level of the bucket, s3://mybucket/prod/ and s3://mybucket/dev/. It's a bit messy, but better than having everything in one directory.
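Whichever of these single-account features you use, the selection usually comes down to one environment variable. A minimal sketch of the pattern (the variable, bucket and table names are hypothetical):

```javascript
// Pick per-environment resource names from a single environment variable.
// Anything other than "prod" falls back to the dev resources.
const ENV = process.env.APP_ENV === "prod" ? "prod" : "dev";

const config = {
  dev:  { bucket: "myapp-dev-uploads",  table: "myapp-dev-orders",  stage: "dev"  },
  prod: { bucket: "myapp-prod-uploads", table: "myapp-prod-orders", stage: "prod" },
}[ENV];

// Every AWS client call then uses config.bucket / config.table / config.stage,
// so dev code can never accidentally touch a prod resource.
```

The same lookup works whether the split is two buckets, two key prefixes, or two API Gateway stages.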
However, what you really need to ask is how much it actually costs to run a second account versus one account for this use case, and the answer is probably close to the same.
That's the advantage of AWS, and cloud computing in general: you only pay for what you use. Splitting a Lambda across two accounts costs the same as running one Lambda in a single account and invoking it the same total number of times.
The two account approach also gives a lot more clarity into what is going on, and helps prevent issues in production where a development piece of code finds its way in because it is all in one account.
I suggest two AWS accounts. Then make a CloudFormation template that provisions all the resources you need. Once you make the template, it is no longer cumbersome, and having side-by-side environments makes it easy to test code updates before they go live. It is not a good idea to test changes in a production environment.
Yes, this will mean double the costs, but you can always delete the CloudFormation stack in your pre-prod account when you are done testing so there are no idle resources. You just spin them up when you need to test, then spin them down when you are done. So you are only doubling costs for that small window of time when you are testing. And pushing the changes live is just a matter of updating the CloudFormation stack.
These cloud capabilities are one of the big selling points for moving to the cloud in the first place--they can solve the problem you describe without it being cumbersome, but it does take investment in building the CloudFormation template (infrastructure-as-code).
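To make the template approach concrete, here is a minimal sketch of a single parameterized CloudFormation template deployed twice as two stacks. The resource, stack and bucket names are hypothetical:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]
Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Bucket names must be globally unique, so the environment goes in the name
      BucketName: !Sub "myapp-${Environment}-uploads"
```

Deploying it as two stacks, e.g. `aws cloudformation deploy --template-file template.yml --stack-name myapp-dev --parameter-overrides Environment=dev` (and again with `prod`), gives you the side-by-side environments without maintaining two copies of anything.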

What are the pros/cons using serverless framework vs aws sam? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago. The community reviewed whether to reopen this question 1 year ago and left it closed.
I'd like to choose a framework for building and deploying AWS services and I need to have a full list of pros/cons to justify one framework over the other. Since this forum doesn't want people to just post opinions please provide references with your responses. Also, I'd like to hear from people who have deployed production solutions using any of these frameworks.
If you are looking at building serverless applications, I would select the Serverless Framework, for a couple of very big reasons:
The community is a lot bigger. This may not seem like a big deal, but with community contributions constantly improving the framework itself, and a huge quantity of community plugins extending the core framework's functionality enormously, it is difficult to justify anything else.
Documentation quality is amazing. The Serverless Framework has a huge depth of documentation, from reference docs for every feature of the framework to full (and free) courses about building serverless applications and blog posts detailing best practices. Then there are the example repos, guides, tutorials... it's pretty awesome!
The ability to use and mix multiple cloud vendors. SAM is AWS-exclusive, so if you wanted to create services in other clouds such as Azure or GCP, you would be stuck. And it's not just the big players, either: Twilio, IBM Cloud, Cloudflare, Tencent, OpenWhisk and more are all supported.
Free monitoring and management platform. The team at Serverless Inc also produce a pretty stellar SaaS platform at dashboard.serverless.com that provides a lot of the "missing" capabilities needed for application development such as monitoring, debugging, troubleshooting, CI/CD and a bunch more!
Components make deploying specific use cases a piece of cake. Components is one of the newest projects to come out of Serverless, Inc. and promises a shift in how we build serverless applications that is far more use-case driven, but also focuses a lot more on the developer experience. Something to definitely keep your eye on.
So yes, I would suggest the Serverless Framework for a lot of really compelling reasons!
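To show how little boilerplate is involved, here is a minimal serverless.yml sketch; the service name, handler path and route are hypothetical:

```yaml
service: my-mvp
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
functions:
  hello:
    handler: handler.hello
    events:
      - httpApi: "GET /hello"
```

One `serverless deploy` from the project root packages the function, provisions API Gateway and Lambda, and wires the route, which is a big part of why the framework is so popular for small services.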

How to build complex apps with AWS Lambda and SOA?

We currently run a Java backend which we're hoping to move away from and switch to Node running on AWS Lambda & Serverless.
Ideally during this process we want to build out a fully service orientated architecture.
My question is: if our frontend Angular app requests the current user's ordered items, it would need to hit three services to get that information: the user service, the order service and the item service.
Does this mean we would need make three get requests to these services? At the moment we would have a single endpoint built for that specific request, which can then take advantage of DB joins for optimal performance.
I understand the benefits of SOA, but how do we scale when performing more complex requests such as this? Are there any good resources I can take a look at?
Looking at your question, I would advise you to align your priorities first: why do you want to move away from the Java backend that you're running now? Which problems do you want to overcome?
You're combining the microservices architecture and the concept of serverless infrastructure in your question. Both can be used in conjunction, but they don't have to. A lot of companies are using microservices, even bigger enterprises like Uber (on NodeJS), but serverless infrastructures like Lambda are really just getting started. I would advise you to read up on microservices especially, e.g. here are some nice articles. You'll also find answers to your question about performance and joins.
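On the performance-and-joins point specifically, the usual pattern is an API-composition (aggregator) endpoint that fans out to the services and does the join in code. A minimal sketch, with the three service clients injected as hypothetical async functions:

```javascript
// Aggregate "the current user's ordered items" from three services.
// userSvc, orderSvc, itemSvc are hypothetical async clients (HTTP, SDK, etc.).
async function getUserOrderedItems(userId, { userSvc, orderSvc, itemSvc }) {
  const user = await userSvc(userId);      // user service
  const orders = await orderSvc(user.id);  // order service
  // Fan out the item lookups in parallel rather than one at a time
  const items = await Promise.all(orders.map((o) => itemSvc(o.itemId)));
  return { user, items };
}
```

The frontend still makes a single request; the aggregator absorbs the three internal calls, trading the DB join for parallel service calls.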
When considering an architecture based on Lambda, do consider that no state can be relied on within a Lambda function. This goes a step further than the stateless services we usually talk about, which generally just avoid keeping client state between requests; a Lambda function cannot depend on any state surviving between invocations, so e.g. a persistent DB connection pool is not something you can count on. For all the downsides, there's also a lot of stuff you don't have to deal with, which can be very beneficial, especially in terms of scalability.

Mashery vs WSO2 vs 3scale [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
I would like to know the differences between Mashery, WSO2 and 3scale. Can someone who has used API managers before give their opinion? What are the advantages and disadvantages of each one?
Thanks.
Not sure, but this question might end up flagged as off-topic as a vendor comparison; anyway, I'll jump in. I work at 3scale (full disclosure), but hopefully this is useful regardless, as the three are pretty different. Trying to be as neutral as possible:
3scale uses NGINX and/or open-source code plugins to enforce all of the API traffic rules and limits (rate limits, key security, OAuth, analytics, switching apps on and off, etc.), and the traffic always flows directly to your servers (not via the cloud), so you don't have additional latency or privacy concerns. Because it's NGINX, it's also widely supported, very fast and flexible. Then it has a SaaS backend that manages all the analytics, rate limits, policies, developer portal, alerts, etc., and synchronizes across all the traffic manager nodes. It's free to use up to nearly 5 million API calls per month.
WSO2's system is an additional module to the WSO2 ESB, so if you're already using that, it makes a lot of sense. It runs everything locally with no cloud components, a pro or a con depending on how you see it. It's also been around a lot less time and doesn't have as large a user base.
Mashery has two systems. In the main one, API traffic flows through Mashery's cloud systems first and has traffic management applied there, so there is always a latency-heavy round trip between the users of the API and your servers, and it means Mashery is in your API traffic's critical path. They also have an on-premise traffic manager, but it's much less widely used. Both solutions have very significant costs and long-term commitments.
At 3scale, what we see as the main advantage is that you have a ton of control over how you set up all the traffic flow and never have to route through a third party, plus you get the benefit of having all the heavy lifting hosted and synchronized across multiple data centers. We're also committed to having a strong free-forever tier of service, since we want to see a lot of APIs out there! http://www.3scale.net/
Good luck with your choice!
Steve.

Messaging in a micro-service architecture [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
I'm beginning to investigate service-oriented architectures and wonder how best to structure the messaging between processes. It seems that direct HTTP calls between services and/or a pub/sub bus are two common approaches. In what sorts of situations is one more favorable than the other? I can see how pub/sub would lead to more decoupled services, but I also get the impression that it becomes much harder to track a message's path through the system.
What are some resources for learning more about this? I'm particularly curious about this in the context of very small, "hand-rolled" services (i.e. Ruby/Sinatra, Node/Express, Redis pubsub, etc.) as opposed to any of the prescribed SOA stacks/suites out there...though I'm sure the same principles apply.
Thanks!
I'll give you my two cents.
but I also get the impression that it becomes much harder to track a message's path through the system.
You're right that pub/sub SOA architectures (AKA SOA 2.0) provide a great deal of decoupling, but you also pay a price, because that is exactly what happens: messages become harder to trace, although tools like Splunk can help a lot.
seems that direct HTTP calls between services and/or a pubsub bus are two common approaches
Actually, if you look at the most-used .NET event SOA frameworks (NServiceBus, Mule and MassTransit), they don't use HTTP calls; but yes, you can implement a microservices architecture and use HTTP as the communication protocol.
I understand you want to start applying some of the best enterprise architecture concepts, but I would say you're better off starting with simpler, yet stronger, foundations. There is no point in jumping to event-driven SOA without knowing whether you really need it.
If I were starting a new system and wanted to make sure I was properly applying DDD and SOA principles, I would start by identifying the services for my domain. Say you have three services: you could start by declaring the public contracts for each of them. You don't need anything special; you can start with WCF/ASP.NET Web API and a synchronous REST API. You would then make sure that each service gets its own database, because you're aiming for low coupling, and you could then create an API layer (the one visible to the outside world), again using WCF/ASP.NET Web API, because your microservices should not be exposed directly to the outside world.
At this point you would have a SOA-like, yet simple, architecture, and because you would have well-defined contracts, you could extend your services by adding async capabilities, starting with a message queue for each service. You don't need to start with a complex system; start with something basic and well defined, which allows you to scale if you have to.
The system I described could easily be extended to support events if you wanted to, and having synchronous messages already in place would not stop you from adding async messages to the system as well.
But these are just my two cents.