Unable to invoke AWS API Gateway GET URL with GPRS connection

I'm attempting to call a deployed API through GPRS AT commands. I am able to make HTTPS calls; for instance, doing a GET on https://www.amazon.jobs/ gives me a 200 and a large response. However, when I try something similar against my deployed API, I end up receiving a 601 error, which is simply a generic "Network Error" on the GPRS side.
The API works through my browser or even a Python one-liner in the command prompt. I figure it may have something to do with certificates or headers or any number of other things, but I'm not sure. What is the difference between a GET to API Gateway and, say, a GET to other Amazon URLs (like amazon.jobs)? Would it be a better idea to create an intermediary endpoint that could construct a successful call to API Gateway?

Recapping the discussion from the comments...
API Gateway requires an HTTPS connection from a client that supports Server Name Indication (SNI). SNI is an extension to TLS, and it sounds like the SIM900 GPRS module probably doesn't support it.
There's currently no great option for using API Gateway without SNI. You can put a CloudFront distribution in front of your API and enable CloudFront's support for dedicated IPs, which removes the need for SNI, but that's a rather expensive option at $600 per month. It would be cheaper to set up multiple EC2 instances behind an ELB.
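If you want to confirm that SNI is the culprit, a quick check from a machine with a full TLS stack is to attempt the handshake with and without SNI. The sketch below is a rough illustration using Python's ssl module; the execute-api hostname is a placeholder for your own invoke URL, and the exact failure mode without SNI (handshake error versus a certificate for a different domain) depends on the endpoint.

import socket
import ssl

HOST = "abc123.execute-api.us-east-1.amazonaws.com"  # placeholder for your invoke URL's hostname

def handshake(send_sni):
    # Certificate checks are disabled because we only care whether the handshake succeeds.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = socket.create_connection((HOST, 443), timeout=10)
    try:
        with ctx.wrap_socket(sock, server_hostname=HOST if send_sni else None) as tls:
            return "handshake ok, negotiated " + tls.version()
    except ssl.SSLError as exc:
        return "handshake failed: %s" % exc

print("with SNI:   ", handshake(True))
print("without SNI:", handshake(False))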

Related

Do I need a WAF (Web Application Firewall) to protect my app?

I have created a micro-service app relying on simple functions as a service. Since this app is API based, I distribute tokens in exchange for some personal login info (OAuth or login/password).
Just to be clear, developers will then access my app using something like: https://example.com/api/get_ressource?token=personal-token-should-go-here
However, my server and application logic still get hit even if the token is not provided, meaning anonymous attackers could flood my services without logging in, taking my service down.
I came across WAFs recently, and they promise to act as a middle-man, filtering abusive attacks. My understanding is that a WAF just reverse-proxies my API and applies some known attack-pattern filters before delegating a request to my actual backend.
What I don't really get is: what if an attacker has direct access to my backend's IP? Wouldn't he be able to bypass the WAF and DDoS my backend directly? Does WAF protection rely only on my origin IP not being leaked?
Finally, I have read that a WAF only makes sense if it can mitigate DDoS through a CDN, in order to spread Layer 7 DDoS attacks across multiple servers and bandwidth if needed. Is that true, or can I just implement a WAF myself?
Go with the cloud: you can deploy your app to AWS, and there are two plus points to this.
1. Your production server will sit behind a private IP, not a public IP.
2. AWS WAF is an affordable service and is good at blocking DoS, scanner, and flood attacks.
You can also use a CAPTCHA on failed attempts and block the offending IP.
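To make point 2 a little more concrete, here is a rough boto3 sketch of a WAFv2 web ACL with a rate-based rule that blocks IPs flooding the API. The ACL name, the REGIONAL scope, and the 2000-requests-per-5-minutes limit are illustrative choices, and you would still need to associate the ACL with your ALB or API Gateway stage afterwards.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="api-flood-protection",
    Scope="REGIONAL",                       # use "CLOUDFRONT" for a CloudFront distribution
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"},
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rate-limit-per-ip",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-flood-protection",
    },
)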

AWS API Gateway integration with Socket.io

I want to map an API Gateway endpoint to a Socket.io server endpoint, in order to authenticate users through Cognito and, if successful, redirect to the Socket.io server and establish a socket with an optional namespace and rooms.
Does that make sense? I haven't found any example, and API Gateway has only recently added a WebSocket API, but without support for Socket.io.
Your question has two parts:
First, API Gateway using Cognito to authenticate your client;
Second, assuming you are running Node.js with Socket.IO on an EC2 instance, using API Gateway as the endpoint for your clients.
For the first part, you may use the following reference from the AWS documentation. There are several sub-parts when you talk about AWS Cognito, for example the IAM permissions and Method Execution settings needed to enable the HTTP method on an API resource endpoint.
For the second part, to enable API Gateway to establish a synchronous connection to the EC2 port running Socket.io, you may read references like this one.
You should configure your API Gateway as follows:
Protocol: WebSocket connection
Route Selection Expression: e.g. $default
Map the target backend for each of $connect, $disconnect and $default
Use integration type AWS Service
Select EC2 and fill in the rest of the configuration.
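For reference, a minimal boto3 sketch of those steps might look like the following. The API name, the $request.body.action selection expression, and the EC2 URL are assumptions rather than values from the original setup, and the sketch uses a plain HTTP proxy integration to the instance as one simple variant (the answer above suggests an AWS Service integration).

import boto3

apigw = boto3.client("apigatewayv2")

# 1. WebSocket API with a route selection expression
api = apigw.create_api(
    Name="socket-api",
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)

# 2. Integration forwarding to the backend that runs Socket.io (placeholder URL)
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="POST",
    IntegrationUri="http://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/connect",
)

# 3. Map the $connect route (repeat for $disconnect and $default)
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="$connect",
    Target="integrations/" + integration["IntegrationId"],
)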
The answer by Rafael focuses more on using the WebSocket API Gateway, which in my opinion is still relatively new and has some room for improvement. Also, I don't like having Lambda integrations with database access, because without RDS Proxy they exhaust the database connections really fast, and I don't think the HTTP integration adds anything here, because you end up performing an HTTP request anyway; it's just called through the WebSocket API.
One thing I agree with Rafael on is that you need an EC2 instance running socket.io, whether it's in Node.js or Python (I used Python with Flask in my case).
I managed to connect to my socket.io server by using the HTTP API Gateway and setting allow_upgrades=False, so the HTTP protocol won't be upgraded to the WebSocket protocol (which the HTTP API Gateway doesn't support). My HTTP API Gateway just forwards socket.io requests to the load balancer, and the good thing about that is that you can define access control on each route defined in the HTTP API Gateway.
The socket.io server on my EC2 instance is defined like this:
socketio = SocketIO(async_handlers=True, allow_upgrades=False, cors_allowed_origins='*')
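For context, that line could sit in a minimal Flask-SocketIO server roughly like the sketch below; the Flask app object, the port, and the "message" event are illustrative assumptions rather than details from the original setup.

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
# allow_upgrades=False keeps all traffic on HTTP long-polling, which is all
# the HTTP API Gateway can forward (it cannot proxy a WebSocket upgrade).
socketio = SocketIO(app, async_handlers=True, allow_upgrades=False,
                    cors_allowed_origins='*')

@socketio.on('message')
def handle_message(data):
    emit('message', data, broadcast=True)  # echo to every connected client

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)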
And my client connects to it by simply calling the route defined in the HTTP API Gateway which has proxy integration enabled.
https://xxxxxxxxx.execute-api.us-west-2.amazonaws.com/socket.io/{proxy}
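A minimal client sketch using the python-socketio package might look like this; the invoke URL is the same placeholder as above, and the transport is pinned to long-polling because upgrades are disabled on the server.

import socketio

sio = socketio.Client()

@sio.event
def connect():
    print("connected through the HTTP API Gateway")

# Stay on long-polling; the server was started with allow_upgrades=False.
sio.connect("https://xxxxxxxxx.execute-api.us-west-2.amazonaws.com",
            transports=["polling"])
sio.wait()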
Final result - client connected to socket
Before websocket technology, if you wanted real-time data in your browser, you needed a wasteful polling strategy. That's why websocket technology was introduced. However, it took some time before browsers supported it. On top of that, it wasn't that good at handling reconnects.
Socket-io gave us early access to a reliable solution by combining multiple protocols and adding several features to improve stability and recover from errors. With new releases, the protocol changed, and more flags and options were added.
That evolution made socket-io what it is today, which isn't exactly an "open standard". For that reason, it will probably never be decently supported on AWS.
Some possible solutions:
Having said that, browsers have evolved and most of them support WebSockets now. So, you could consider migrating (back) from socket-io to plain old WebSockets. Nevertheless, you probably want to add a "heartbeat" that sends ping/pong messages back and forth to detect disconnects (which is one of the things socket-io has built in).
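As a rough illustration of such a heartbeat on plain WebSockets, the sketch below uses Python's asyncio with the websockets package; the URL and the 15-second interval are arbitrary placeholders, and the reconnect handling is left to the caller.

import asyncio
import websockets

async def run(url):
    async with websockets.connect(url) as ws:
        while True:
            try:
                pong_waiter = await ws.ping()              # send a ping frame
                await asyncio.wait_for(pong_waiter, timeout=5)
            except asyncio.TimeoutError:
                print("no pong received, treating the connection as dead")
                break                                      # reconnect logic would go here
            await asyncio.sleep(15)

asyncio.run(run("wss://example.com/stream"))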
However, if you like GraphQL, then you should certainly consider AWS AppSync, which among other things supports GraphQL subscriptions to push notifications to the client. The Apollo client is extremely popular and reliable.

Is any AWS service suitable for sending real-time updates to the browser?

I'm developing a stocks app and have to keep users' browsers updated with pricing changes.
I don't need to access past data; the browser just has to get the current data whenever it changes.
Is it possible to filter a DynamoDB stream and expose an endpoint (behind API Gateway) that could be used with a JavaScript EventSource?
I realize this is not using Server-Sent Events, but AWS just announced Serverless WebSockets for API Gateway. Pricing is based on minutes connected and the number of messages sent.
Product Launch Article: https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-api-gateway-launches-support-for-websocket-apis/
Documentation: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html
Pricing: https://aws.amazon.com/api-gateway/pricing/
API Gateway is a store-and-forward service. It collects the response from whatever the back-end happens to be (Lambda, an HTTP server, etc.) and then returns it en bloc to the browser -- it doesn't stream the response, so it would not be suited for use as an EventSource.
AWS doesn't currently have a managed service offering that is obviously suited to this use case... you'd need a server (or more than one) on EC2, consuming the data stream and relaying it back to the connected browsers.
Assuming that running EC2 servers is an acceptable option, you then need HTTPS and load balancing. Application Load Balancer supports WebSockets, so it might also support an EventSource. A Classic ELB in TCP (not HTTP) mode should support an EventSource without a problem, though it might not correctly signal to the back-end when the browser connection is lost. Both of those balancers can also offload HTTPS for you. Network Load Balancer would definitely work for balancing an EventSource, but your instances would need to provide the HTTPS, since NLB doesn't offload it for you.
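As a sketch of what such a relay on EC2 could look like, here is a minimal Server-Sent Events endpoint in Flask. The in-memory queue stands in for whatever process consumes your data stream and feeds it, and the route name is an arbitrary choice.

import queue
from flask import Flask, Response

app = Flask(__name__)
events = queue.Queue()  # filled elsewhere by the process consuming your data stream

@app.route("/stream")
def stream():
    def generate():
        while True:
            data = events.get()           # block until the next update arrives
            yield "data: %s\n\n" % data   # SSE wire format
    return Response(generate(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080, threaded=True)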
A somewhat unorthodox alternative might actually be AWS IoT, which has built-in WebSocket support... not the same as an EventSource, of course, but a streaming connection nonetheless... in such an environment, I suppose each browser user could be an addressable "thing."

Amazon API Gateway in front of ELB and ECS Cluster

I'm trying to put Amazon API Gateway in front of an Application Load Balancer, which balances traffic to my ECS cluster, where all my microservices are deployed. The motivation for using API Gateway is to use a custom authorizer through a Lambda function.
[System diagram]
In Amazon's words (https://aws.amazon.com/api-gateway/faqs/): "Proxy requests to backend operations also need to be publicly accessible on the Internet". This forces me to make the ELB public (internet-facing) instead of internal. I therefore need a way to ensure that only the API Gateway is able to access the ELB from outside the VPC.
My first idea was to use a client certificate in the API Gateway, but the ELB doesn't seem to support that.
Any ideas would be highly appreciated!
This seems to be a huge missing piece for the API gateway technology, given the way it's pushed. Not being able to call into an internal-facing server in the VPC severely restricts its usefulness as an authentication front-door for internet access.
FWIW, in Azure, API Management supports this out of the box - it can accept requests from the internet and call directly into your virtual network which is otherwise firewalled off.
The only way this seems to be possible under AWS is using Lambdas, which adds a significant layer of complexity, esp. if you need to support various binary protocols.
Looks like this support has now been added. Haven't tested, YMMV:
https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
We decided to use a header to check that all traffic is coming through API Gateway. We save a secret in our app's environment variables and tell API Gateway to inject it when we create the API. Then we check for that key in our app.
Here is what we are doing for this:
In our base controller we check for the key (we just have a REST API behind the gateway):
string ApiGatewayPassthroughHeader = context.HttpContext.Request.Headers["ApiGatewayPassthroughHeader"];
if (ApiGatewayPassthroughHeader != Environment.GetEnvironmentVariable("ApiGatewayPassthroughHeader"))
{
    // Reject anything that did not come through API Gateway
    throw new UnauthorizedAccessException("Request did not include the API Gateway passthrough header.");
}
In our swagger file (we are using swagger.json as the source of our APIs)
"x-amazon-apigateway-integration": {
"type": "http_proxy",
"uri": "https://${stageVariables.url}/path/to/resource",
"httpMethod": "post",
"requestParameters": {
"integration.request.header.ApiGatewayPassthroughHeader": "${ApiGatewayPassthroughHeader}"
}
},
In our Docker Compose file (we are using Docker, but the same could be done in any settings file):
services:
  example:
    environment:
      - ApiGatewayPassthroughHeader=9708cc2d-2d42-example-8526-4586b1bcc74d
At build time we take the secret from our settings file and substitute it into the swagger.json file. This way we can rotate the key in our settings file, and API Gateway will be updated to use the key the app is looking for.
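The build-time substitution can be as small as the following sketch; the file names and the environment-variable lookup are assumptions about where the secret is kept.

import os

with open("swagger.json") as f:
    spec = f.read()

# Replace the placeholder with the secret held in the environment/settings.
spec = spec.replace("${ApiGatewayPassthroughHeader}",
                    os.environ["ApiGatewayPassthroughHeader"])

with open("swagger.deploy.json", "w") as f:
    f.write(spec)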
I know this is an old issue, but I think they may have just recently added support.
"Amazon API Gateway announced the general availability of HTTP APIs, enabling customers to easily build high performance RESTful APIs that offer up to 71% cost savings and 60% latency reduction compared to REST APIs available from API Gateway. As part of this launch, customers will be able to take advantage of several new features including the ability the route requests to private AWS Elastic Load Balancers (ELB), including new support for AWS ALB, and IP-based services registered in AWS CloudMap. "
https://aws.amazon.com/about-aws/whats-new/2020/03/api-gateway-private-integrations-aws-elb-cloudmap-http-apis-release/
It is possible if you use VPC Link and Network Load Balancer.
Please have a look at this post:
https://adrianhesketh.com/2017/12/15/aws-api-gateway-to-ecs-via-vpc-link/
TL;DR
Create an internal Network Load Balancer connected to your target group (instances in a VPC)
In the API Gateway console, create a VPC Link and link it to the NLB above
Create an API Gateway endpoint, choose "VPC Link integration" and specify your NLB's internal URL as the "Endpoint URL"
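The same steps via boto3, for reference; the names, ARNs, and IDs below are placeholders for your own resources.

import boto3

apigw = boto3.client("apigateway")

# 1. VPC Link pointing at the internal Network Load Balancer
vpc_link = apigw.create_vpc_link(
    name="ecs-vpc-link",
    targetArns=["arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123"],
)

# 2. Attach a VPC Link integration to an existing resource/method
apigw.put_integration(
    restApiId="REST_API_ID",
    resourceId="RESOURCE_ID",
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    connectionType="VPC_LINK",
    connectionId=vpc_link["id"],
    uri="http://internal-nlb-dns-name/path",  # the NLB's internal URL
)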
Hope that helps!
It is now possible to add an authorizer directly to Application Load Balancer (ALB) in front of ECS.
This can be configured directly in the rules of a listener. See this blog post for details:
https://aws.amazon.com/de/blogs/aws/built-in-authentication-in-alb/
Currently there is no way to put API Gateway in front of a private ELB, so you're right that it has to be internet-facing. The best workaround for your case that I can think of would be to put the ELB into TCP pass-through mode and terminate the client certificate on your end hosts behind the ELB.
The ALB should be internal in order to have requests routed to it through the VPC Link (private integration). This works perfectly fine in my setup without needing to put an NLB in front of it.
Routes should be as follows:
$default
/
GET (or POST, or whichever you want to use)
The integration should be attached to all routes: $default, GET/POST/ANY, etc. (a boto3 sketch follows).
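A rough boto3 sketch of that HTTP API to VPC Link to internal ALB wiring is below; the subnet, security group, API ID, and listener ARN values are placeholders.

import boto3

apigw = boto3.client("apigatewayv2")

# VPC Link living in the same subnets as the internal ALB
link = apigw.create_vpc_link(
    Name="internal-alb-link",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    SecurityGroupIds=["sg-ccc"],
)

# Private integration pointing at the ALB listener
integration = apigw.create_integration(
    ApiId="HTTP_API_ID",
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    ConnectionType="VPC_LINK",
    ConnectionId=link["VpcLinkId"],
    IntegrationUri="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxx/yyy",
    PayloadFormatVersion="1.0",
)

# Attach the same integration to each route ($default, GET /, ...)
for route_key in ["$default", "GET /"]:
    apigw.create_route(
        ApiId="HTTP_API_ID",
        RouteKey=route_key,
        Target="integrations/" + integration["IntegrationId"],
    )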

Does Kinesis support HTTP (not HTTPS)?

I have tried out the Kinesis REST API with HTTPS and it works fine. But I want to build with only HTTP, not HTTPS. Does Kinesis support HTTP without SSL?
No, it doesn't. According to the Regions and Endpoints documentation the Kinesis endpoints only support HTTPS.
http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region
If you are in a situation where you need to communicate with an API that only supports HTTPS but you are, for some significant reason, constrained to HTTP only, you might find that you can use a proxy that accepts unencrypted connections and originates encrypted connections to the final endpoint. On some of my legacy systems, I have accomplished this with HAProxy 1.5 or higher (previous versions do not have built-in OpenSSL integration)... or Stunnel4, which I used before HAProxy 1.5 was released. Apparently there is now a Stunnel 5.
Of course, this is only viable if the network between the legacy system and the proxy doing the SSL origination is trusted.
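As a toy illustration of the idea (HAProxy or stunnel is what you would actually run in practice), the sketch below accepts plain TCP locally and originates a TLS connection to the HTTPS endpoint. The listen port and the Kinesis hostname are placeholders, and the legacy client would have to be pointed at the local port instead of the real endpoint.

import socket
import ssl
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)                        # plain-HTTP side
UPSTREAM = ("kinesis.us-east-1.amazonaws.com", 443)      # HTTPS-only endpoint

def pump(src, dst):
    # Copy bytes in one direction until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    # Originate the encrypted connection to the real endpoint.
    ctx = ssl.create_default_context()
    upstream = ctx.wrap_socket(socket.create_connection(UPSTREAM),
                               server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()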