Mashery vs WSO2 vs 3scale [closed]

I would like to know the differences between Mashery, WSO2, and 3scale. Can someone who has used API managers before give their opinion? What are the advantages and disadvantages of each one?
Thanks!

This question might end up flagged as off-topic since it's a vendor comparison, but I'll jump in anyway. I work at 3scale (full disclosure), but hopefully this is useful regardless - the three are pretty different. I'll try to be as neutral as possible:
3scale uses NGINX and/or open-source plugins to enforce all of the API traffic rules and limits (rate limits, key security, OAuth, analytics, switching apps on and off, etc.), and the traffic always flows directly to your servers (not via the cloud), so you don't have additional latency or privacy concerns. Because it's NGINX it's also widely supported, very fast, and flexible. It then has a SaaS backend that manages all the analytics, rate limits, policies, developer portal, alerts, etc., and synchronizes across all the traffic-manager nodes. It's free to use up to nearly 5 million API calls per month.
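To make the gateway idea concrete, here is a minimal sketch of key checking and rate limiting in Node/TypeScript. 3scale actually implements this inside NGINX; the key store, header name, and limit below are all illustrative, not 3scale's API:

```typescript
// Sketch of gateway-style enforcement in front of an upstream API.
// Real gateways (e.g. NGINX-based ones) synchronize counters across nodes;
// this in-memory version just illustrates the pattern.
import express, { Request, Response, NextFunction } from "express";

const VALID_KEYS = new Set(["demo-key"]); // hypothetical key store
const RATE_LIMIT = 100;                   // requests per window (illustrative)
const counters = new Map<string, number>();

function enforce(req: Request, res: Response, next: NextFunction) {
  const key = req.header("x-api-key") ?? "";
  if (!VALID_KEYS.has(key)) {
    return res.status(401).json({ error: "invalid key" });
  }
  const used = (counters.get(key) ?? 0) + 1;
  counters.set(key, used);
  if (used > RATE_LIMIT) {
    return res.status(429).json({ error: "rate limit exceeded" });
  }
  next(); // traffic then flows on to your own servers
}

const app = express();
app.use(enforce);
app.listen(8080);
```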
WSO2's system is an additional module to the WSO2 ESB, so if you're already using that it makes a lot of sense. It runs everything locally with no cloud components - a pro or a con depending on how you see it. It has also been around for much less time and doesn't have as large a user base.
Mashery has two systems. With the main one, API traffic flows through Mashery's cloud systems first and has traffic management applied there, so there is always a latency-heavy round trip between the users of the API and your servers, and it means Mashery is in your API traffic's critical path. They also have an on-premises traffic manager, but it's much less widely used. Both solutions have very significant costs and long-term commitments.
Speaking for 3scale, what we see as the main advantage is that you have a ton of control over how you set up the traffic flow and never have to route through a third party, plus you get the benefit of having all the heavy lifting hosted and synchronized across multiple data centers. We're also committed to a strong free-forever tier of service, since we want to see a lot of APIs out there! http://www.3scale.net/
Good luck with your choice!
steve.

Related

Should I use Serverless Computing [closed]

I am creating an MVP (minimum viable product) that has a Node.js server using Express for a REST API and a socket.io connection for chat features.
My concern is not so much cost or scalability as setup time and maintenance, since this is an MVP. Would serverless or non-serverless take less time to set up and maintain on AWS?
Serverless is a great choice if you want to set up a simple REST API application. Using Express would also be a good choice.
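One common way to combine the two is the serverless-http package, which adapts API Gateway events into ordinary Express requests; a minimal sketch:

```typescript
// Sketch: an Express REST API deployed as a single Lambda handler
// via serverless-http. Route and payload are illustrative.
import express from "express";
import serverless from "serverless-http";

const app = express();

app.get("/hello", (_req, res) => {
  res.json({ message: "hello from Lambda" });
});

// API Gateway invokes this handler; serverless-http translates the
// event into a normal HTTP request for Express.
export const handler = serverless(app);
```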
API Gateway and Serverless now support WebSockets too, so it should be pretty easy to create a WebSocket application. When it comes to socket.io, however, you will need to do a bit of research before diving in.
WebSocket support on API Gateway is a relatively new concept, and there aren't many resources online about it; the combination with Lambda can be a little difficult to grasp at first. For socket.io there are even fewer.
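To give a feel for the Lambda side, here is a minimal sketch of a handler serving the $connect, $disconnect, and $default routes of an API Gateway WebSocket API. Note this speaks the raw WebSocket protocol, not socket.io:

```typescript
// Sketch: one Lambda behind an API Gateway WebSocket API.
// Replies go back through the API Gateway management API.
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from "@aws-sdk/client-apigatewaymanagementapi";

export const handler = async (event: any) => {
  const { routeKey, connectionId, domainName, stage } = event.requestContext;

  // $connect / $disconnect carry no message body; just acknowledge them.
  if (routeKey === "$connect" || routeKey === "$disconnect") {
    return { statusCode: 200 };
  }

  // $default: echo the incoming message back to the sender.
  const client = new ApiGatewayManagementApiClient({
    endpoint: `https://${domainName}/${stage}`,
  });
  await client.send(
    new PostToConnectionCommand({
      ConnectionId: connectionId,
      Data: Buffer.from(`echo: ${event.body}`),
    })
  );
  return { statusCode: 200 };
};
```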
For your MVP, I personally recommend running socket.io on an EC2 instance. I think it'd be easier.
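For comparison, the EC2 route is just an ordinary long-lived process; a minimal socket.io chat server might look like this (port and event names are illustrative):

```typescript
// Sketch: a standalone socket.io chat server for a single EC2 instance.
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  console.log(`client connected: ${socket.id}`);

  // Relay chat messages to every connected client.
  socket.on("chat", (msg: string) => io.emit("chat", msg));

  socket.on("disconnect", () => console.log(`client left: ${socket.id}`));
});
```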
There are several reasons to choose a serverless infrastructure over a non-serverless one. In many cases these align very closely with the five pillars of the AWS Well-Architected Framework. Serverless architectures offer great:
Reliability - no need to guess capacity, can easily scale horizontally with demand
Efficiency - tremendously reduced costs for intermittent and infrequent workloads
Maintenance - nonexistent
Availability - highly available and fault tolerant
While your proposed project does appear to fit well within the FaaS model (an infrequent, unpredictable workload with low resource requirements), the disadvantages of serverless, notably the more complex and harder-to-test architecture and vendor lock-in, can make it challenging to rapidly prototype and deploy an MVP.
As your product favors an engineering tradeoff toward time to market, a non-serverless approach will most likely enable you to release an MVP quickly with minimal headache.
There are several serverless frameworks that work with AWS Lambda. From my real-world experience, here are some things to note about each of them:
AWS Amplify (https://docs.amplify.aws) is a full-stack serverless solution for developers. It's quite easy to use at the beginning. The downside is that over time your maintenance cost rises on the deployment side: deploying a stack to AWS is very slow when you just need to change a small piece of code, since it downloads all the stack files locally and then uploads them again.
The Serverless Framework (https://www.serverless.com) is less complex, has rich plugins, and is oriented toward nano-functions. The downside is that, by default, the same code package is used across all your Lambdas, so as the project grows, the package size grows and your Lambdas' cold starts get slower.
The Simplify framework (https://github.com/simplify-framework/codegen) is lightweight but rich in functionality. It lets you build your CI/CD inside your project, a concept similar to the AWS CDK, which was released just a few days earlier, in May. You design your API using OpenAPI (Swagger) specs, which are reused to standardize your APIs across tools and processes. It has been carefully designed to avoid vendor lock-in: today you are on AWS, but tomorrow it could run on your own on-premises servers.
Choose what fits your solution; there is no one-size-fits-all.

AWS Server Size for Hosting [closed]

I am looking into purchasing server space with AWS to host what will eventually be over 50 websites, with widely varying levels of traffic. I would like to know if anyone has a recommendation on what size of server would be able to handle this many sites.
Also, I was wondering if it's more cost-effective/efficient to host a separate EC2 instance for each site, or to purchase one large umbrella server and host all the sites on a single instance?
Thanks,
Co-locating services on a single server versus spreading them across multiple servers is a core architectural decision your firm should make. It will directly impact the performance, security, and cost of your systems.
The benefit of having multiple services on the same Amazon EC2 instance is that they can share resources (RAM, CPU) so if one application is busy, it has access to more total resources. This is in contrast to running each one on a separate instance, where there is a smaller, finite quantity of resources. Think of it like car-pooling vs riding motorbikes.
Sharing resources means you can probably lower costs, since you'll need less total capacity.
From a security perspective, running on separate instances is much better because they are isolated from each other. You should also investigate network isolation to prevent potential breaches between instances on the same virtual network.
You should also look at the ability to host all of these services using a multi-tenant system as opposed to 50 completely separate systems. This has further benefits in terms of sharing resources and reducing costs. For example, Salesforce.com doesn't run a separate computer for each customer -- all the customers use the same systems, but security and data is kept separate at the application layer.
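As a toy illustration of that application-layer separation (not a production pattern; the Page type, in-memory store, and host-based tenant lookup are all hypothetical), a single Node/TypeScript process could serve many sites like this:

```typescript
// Sketch: one instance, many tenants; isolation enforced in code.
import express, { Request, Response } from "express";

interface Page { tenantId: string; path: string; html: string }
const pages: Page[] = []; // stand-in for a shared database

const app = express();

app.get("*", (req: Request, res: Response) => {
  // Resolve the tenant from the Host header (one tenant per domain).
  const tenantId = req.hostname;
  // Every lookup is scoped to the tenant, so data stays separate even
  // though all tenants share the same process and database.
  const page = pages.find(
    (p) => p.tenantId === tenantId && p.path === req.path
  );
  if (!page) return res.status(404).send("Not found");
  res.send(page.html);
});

app.listen(8080);
```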
Bottom line: There are some major architectural decisions to make if you wish to roll-out secure, performant systems.
The short answer:
If those sites are only static (HTML, CSS, and JS), EC2 isn't necessary: you can use S3 instead, which is cheaper, and you won't have to worry about scaling.
But if those sites have a dynamic part (PHP, Python, or similar), it's a different story.
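For the static case, a minimal sketch of pushing a file to S3 with the AWS SDK v3 (bucket name, region, and file path are placeholders; you would also enable static website hosting on the bucket or put CloudFront in front):

```typescript
// Sketch: upload one static file to S3 (AWS SDK v3).
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFileSync } from "fs";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function deploy(): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-static-site", // hypothetical bucket
      Key: "index.html",
      Body: readFileSync("dist/index.html"),
      ContentType: "text/html",
    })
  );
}

deploy().catch(console.error);
```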

Should I use Google Charts in a production environment [closed]

Should I use Google Charts in a production environment?
Google Charts is very easy to use:
https://google-developers.appspot.com/chart/interactive/docs/quick_start
But is it recommended for use in a production environment?
The APIs are not hosted in-house but are called from Google's servers.
There is a risk of Google changing them or discontinuing them.
I couldn't find any license agreement for their use.
Is the data secure, given that it is being sent to Google's servers?
Are the above real risks, or am I overthinking this?
I was wondering if anyone has experience using Google APIs in production, or if anyone can give some recommendations.
The Terms of Service cover some of your questions. Basically, Google's deprecation policy says that the API will be available for 3 years following deprecation (and most of the API - namely, the Interactive Charts API - is not deprecated; the old Image Chart API is, however).
For data security, most charts in the Interactive Charts API do not send any data to Google's servers, though there are exceptions. Each chart's documentation has a Data Policy section which explains what, if any, data is sent to Google (examples: AreaCharts, which do not send any data; and GeoCharts, which may send data if you use the geocoding features). Charts in the Image Chart API do send data to Google's servers, as they generate the images server-side rather than client-side, but this API is deprecated anyway, so you probably shouldn't be using it.
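To make the data-policy point concrete, here is a minimal sketch along the lines of the quick start, drawing a core chart entirely client-side; the element id and data are placeholders, and the frozen version number mentioned in the comment is an option the loader accepts:

```typescript
// Minimal client-side sketch per the quick-start loader; assumes
// <script src="https://www.gstatic.com/charts/loader.js"></script>
// and a <div id="chart"></div> on the page (both placeholders here).
declare const google: any; // provided by the loader script

// 'current' tracks the latest release; a frozen number such as '45'
// can be loaded instead if you want to pin a specific version.
google.charts.load("current", { packages: ["corechart"] });
google.charts.setOnLoadCallback(() => {
  const data = google.visualization.arrayToDataTable([
    ["Task", "Hours per Day"],
    ["Work", 8],
    ["Sleep", 8],
  ]);
  // Drawing happens in the browser; for a core chart like this,
  // the data is not sent to Google's servers.
  new google.visualization.PieChart(document.getElementById("chart")!)
    .draw(data, { title: "My Daily Activities" });
});
```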
The main risk with using the Visualization API in my experience is that you have (practically) no control over versioning. When the development team releases an update, everyone everywhere gets the update. Usually this is a good thing, as it brings new features, bug fixes, and performance enhancements to everyone. Occasionally, however, a new release may introduce a bug, or change the behavior or appearance of a chart in some way that is undesirable for your application. When this happens, you generally cannot roll back to the previous version. For projects that are under active development for long periods of time, this is generally an acceptable trade-off for the free (as in beer) chart API. For projects that do not have a long-term maintenance budget, this can be problematic.
If your user-base is in an area that has poor connectivity to Google's servers, having the API hosted remotely could be problematic, but in general this is not the case.
I have used it in a production environment, and all the questions you have posed are very real possibilities. For us it came down to budget: the money was there to purchase a system, so we went with what we could afford at the time. The direction you go really depends on budget and on existing systems that might be able to achieve the same thing.

Can the way a site is coded affect how much we spend on hosting? [closed]

Our website is an eCommerce store trading in ethically sourced loose diamonds. We do not get much traffic, yet our Amazon bill is huge ($300/month for 1,500 unique visits). Is this normal?
I do know we are pulling data from another source twice daily, and the files are large. Does it make sense to use regular hosting for this process and keep Amazon just for our site?
Most of the cost is for Amazon Elastic Compute Cloud. About 20% is for RDS service.
I am wondering if:
(a) our developers have done something which leads to this kind of usage OR
(b) Amazon is just really expensive
Is there a paid-for service we can use to ensure our site is optimised for its hosting - in terms of cost, usage, and speed?
It should probably cost you around 30-50 dollars a month; 300 seems higher than necessary.
For 1,500 visitors, you can most likely get away with an m1.small instance.
I'd say check out the AWS Trusted Advisor service, which will tell you about your utilization and where you can optimize your usage, but you can only get it with AWS Business support ($100/month). However, considering you're way over what is expected, it might be worth looking into.
Trusted Advisor will inform you about quite a few things:
cost optimization
security
fault tolerance
performance
I've generally found it to be one of the most useful additions to my AWS infrastructure.
Additionally, if you sign up for Business support, you not only get Trusted Advisor but can also ask the support staff questions directly via chat, email, or phone, which would be quite useful for pinpointing your problem areas.

Messaging in a micro-service architecture [closed]

I'm beginning to investigate service-oriented architectures and wonder how best to structure the messaging between processes. It seems that direct HTTP calls between services and/or a pub/sub bus are two common approaches. In what sorts of situations is one more favorable than the other? I can see how pub/sub would lead to more decoupled services, but I also get the impression that it becomes much harder to track a message's path through the system.
What are some resources for learning more about this? I'm particularly curious about this in the context of very small, "hand-rolled" services (e.g. Ruby/Sinatra, Node/Express, Redis pub/sub, etc.) as opposed to any of the prescribed SOA stacks/suites out there... though I'm sure the same principles apply.
Thanks!
I'll give you my two cents.
but I also get the impression that it becomes much harder to track a message's path through the system.
You're right that pub/sub SOA architectures (AKA SOA 2.0) provide a great deal of decoupling, but you also pay a price, because that is exactly what happens - although tools like Splunk can help a lot.
It seems that direct HTTP calls between services and/or a pub/sub bus are two common approaches
Actually, if you look at the most widely used event-driven SOA frameworks (NServiceBus, Mule, and MassTransit), they don't use HTTP calls; but yes, you can implement a microservices architecture and use HTTP as the communication protocol.
I understand you want to start applying some of the best enterprise-architecture concepts, but I would say you are better off starting with simpler, yet stronger, foundations. There is no point in jumping into event-driven SOA without knowing whether you really need it.

If I were starting a new system and wanted to make sure I was properly applying DDD and SOA principles, I would start by identifying the services for my domain. Say you have 3 services: you could start by declaring the public contracts for each of them. You don't need anything special; you can start with WCF/ASP.NET Web API and a synchronous REST API. You would then make sure each service gets its own database, because you're aiming for low coupling. You could then create an API layer (the one visible to the outside world), again using WCF/ASP.NET Web API, because your microservices should not be exposed directly to the outside world.

At this point you would have a SOA-like architecture that is simple in design, but because you would have well-defined contracts, you could extend your services by adding async capabilities, starting with a message queue for each service. You don't need to start with a complex system; start with something basic and well defined, which allows you to scale if you have to.
The system I described could easily be extended to support events if you wanted to, and the fact that you would already have synchronous messages at that point would not stop you from adding async messages to the system as well.
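Since the question mentions Redis pub/sub specifically, here is a minimal sketch of that decoupled style using the node-redis client; the channel name and payload are illustrative:

```typescript
// Sketch: two logical services communicating via Redis pub/sub.
// Publishers emit events without knowing who listens.
import { createClient } from "redis";

async function main(): Promise<void> {
  const publisher = createClient(); // defaults to localhost:6379
  const subscriber = publisher.duplicate();
  await publisher.connect();
  await subscriber.connect();

  // A consuming service subscribes to the events it cares about...
  await subscriber.subscribe("order.created", (message) => {
    console.log("billing service saw:", message);
  });

  // ...and the producing service publishes without any direct coupling.
  await publisher.publish("order.created", JSON.stringify({ id: 42 }));
}

main().catch(console.error);
```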
But these are just my two cents.