Does Kinesis support HTTP (not HTTPS)?

I have tried out the Kinesis REST API with HTTPS and it works fine, but I want to build against it with only HTTP, not HTTPS. Does Kinesis support HTTP without SSL?

No, it doesn't. According to the Regions and Endpoints documentation the Kinesis endpoints only support HTTPS.
http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region
If you are in a situation where you need to communicate with an API that only supports HTTPS but you are, for some significant reason, constrained to HTTP only, you may be able to use a proxy that accepts unencrypted connections and originates encrypted connections to the final endpoint. On some of my legacy systems, I have accomplished this with HAProxy 1.5 or higher (previous versions do not have built-in OpenSSL integration)... or Stunnel 4, which I used before HAProxy 1.5 was released. Apparently there is now a Stunnel 5.
Of course, this is only viable if the network between the legacy system and your SSL client offloading is trusted.
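For illustration, here is a minimal Python sketch in the spirit of what stunnel or HAProxy do in this setup: listen for plaintext TCP on a trusted local interface and originate a TLS connection to the real Kinesis endpoint. The listen address and port are assumptions; in practice you would use HAProxy or stunnel rather than this toy forwarder:
import socket
import ssl
import threading

# Toy stunnel-style forwarder: accept plaintext TCP on a trusted local network
# and originate TLS to the real HTTPS endpoint. Addresses/ports are placeholders.
LISTEN = ("127.0.0.1", 8000)
TARGET = ("kinesis.us-east-1.amazonaws.com", 443)

def pump(src, dst):
    # Copy bytes in one direction until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass

def handle(client):
    ctx = ssl.create_default_context()
    with client, socket.create_connection(TARGET) as raw, \
            ctx.wrap_socket(raw, server_hostname=TARGET[0]) as upstream:
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
        pump(client, upstream)

with socket.create_server(LISTEN) as server:
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
The forwarder is protocol-agnostic: the client simply points at the local port while the proxy takes care of the TLS hop to the real endpoint.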

Related

Enforce AES-256 to AWS elastic beanstalk

Disclaimer: I am fully aware that AES-128 is considered secure, but we have weird governmental requirements.
We run a server that provides a WebSocket interface to our clients as an Elastic Beanstalk application on AWS. It has an Application Load Balancer in front of it which handles the HTTPS termination. We have a strange requirement on our system where all channels need to have more than 200-bit encryption.
When our clients (which are IoT devices) establish the connection, the agreed-upon encryption becomes AES-128 (because all security policies in AWS accept AES-128, and the devices do too).
The only way to enforce AES-256 on the server side is to use the Classic Load Balancer and add the ciphers ourselves. However, the Classic Load Balancer does not support WebSockets.
Is there any possible way of circumventing this? Or do we need to add our own encryption on top of our channel to fulfill the requirements?
I believe the best you could do with an Application Load Balancer (ALB) is to configure it to use the ELBSecurityPolicy-FS-1-2-Res-2020-10 security policy; however, based on the table in the docs, it will still be possible to negotiate ECDHE-ECDSA-AES128-GCM-SHA256 and ECDHE-RSA-AES128-GCM-SHA256, which allows AES-128 as the encryption method.
Another option would be to put a WebSocket API Gateway in front, but the ciphers are pretty much the same, and you might also need to deal with throttling, which is probably not the best thing to do considering the IoT clients.
Putting CloudFront in front of the ALB is not going to cut it either, as it takes the same approach and the ciphers in its security policies are essentially the same.
The security policies of the Network Load Balancer (NLB) are actually the same as those of the ALB.
Essentially, all of the relevant AWS services rely on the same security policies.
Which leads us to the two final options:
trying somehow to force it on the client end, which is most likely not possible;
or replacing the ALB with a Network Load Balancer (which supports WebSockets), as suggested by @Mark B, setting up TCP listeners on it and handling the TLS yourself server side in your EB application. The details vary based on your application platform, but you should be able to enforce stricter (AES-256) ciphers that way; see the sketch below.
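As a rough illustration of that last option, here is a minimal sketch in Python (assuming a Python-based EB application; the certificate paths and port are placeholders) of terminating TLS in the application behind an NLB TCP listener while only offering AES-256 cipher suites:
import socket
import ssl

# TLS context that only offers AES-256-GCM suites. Pinning to TLS 1.2 matters,
# because Python's set_ciphers() does not restrict the TLS 1.3 suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384")
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder paths

# The NLB TCP listener forwards the raw stream here, so the handshake
# (and therefore the cipher choice) happens in the application itself.
with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()
        print("negotiated cipher:", conn.cipher())
        conn.close()
A real WebSocket server would hand the accepted socket to its framework rather than closing it; the point is only that the cipher list is now entirely under your control.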

Google Cloud Run - Does it support non http protocol for inbound request or outbound requests?

I have an existing application that I was trying to deploy to Google Cloud Run, but the application can't connect to an external NATS service. It seems that Cloud Run only supports HTTP, WebSockets, and gRPC for outbound connections and traffic; it doesn't seem that an application can make a plain TCP/IP connection carrying binary data from a Cloud Run application.
Just looking for someone to confirm this. I have not been able to find any documentation that explicitly states the inbound and outbound limitations of Cloud Run.
Note: my app can connect to Redis, though.
Thank you for your help.
There's no outbound protocol limitation.
In terms of inbound, Cloud Run only supports HTTP/1 and HTTP/2 (which includes gRPC) over TLS (HTTPS). That said, server-streaming is not supported (e.g. websockets or streaming gRPC calls).
Therefore, you cannot serve with arbitrary TCP wire protocols on Cloud Run.
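To illustrate the outbound side, a plain TCP socket from a Cloud Run container to an external service works; a minimal Python sketch (the hostname and port are placeholders for a NATS server):
import socket

# Outbound traffic from Cloud Run is not restricted to HTTP, so a raw TCP
# connection to an external service is possible. Host/port are placeholders.
with socket.create_connection(("nats.example.com", 4222), timeout=5) as conn:
    banner = conn.recv(1024)  # a NATS server greets new connections with an INFO line
    print(banner.decode(errors="replace"))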
Yes. According to this document, Cloud Run only supports HTTP, WebSockets, and gRPC at the moment; the same page also points out which kinds of HTTP requests are not supported. I suggest you create a feature request; if it gains enough popularity, the feature may be added to Cloud Run.

AWS API Gateway integration with Socket.io

I want to map an API Gateway endpoint to a Socket.io server endpoint, in order to authenticate users through Cognito and, if successful, redirect to the Socket.io server and establish a socket with an optional namespace and rooms.
Does that make sense? I haven't found any examples, and API Gateway has only recently added a WebSocket API, but without support for Socket.io.
Your question has two parts:
First, the API Gateway using Cognito to authenticate your clients;
Second, assuming you are using an EC2 instance running Node.js with Socket.IO, using API Gateway as the endpoint for your clients.
For the first part, you may use the following reference from the AWS documentation.
There are several sub-parts when you work with AWS Cognito, for example setting IAM permissions on Method Execution to enable the HTTP method on the API resource endpoint.
For the second part, to enable API Gateway to establish a synchronous connection to the EC2 port running Socket.io, you may read references like this one.
You should configure your API Gateway as follows (see the sketch after this list):
Protocol: WebSocket connection
Select your route selection expression, e.g. $default
Map the target backend for each of $connect, $disconnect and $default
Use the integration type AWS Service
Select EC2 and fill in the rest of the configuration.
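For reference, roughly the same configuration can be sketched with boto3 (the API name and the route selection expression are placeholders; the integrations pointing at the EC2 backend are attached separately):
import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

# Create the WebSocket API with a route selection expression.
api = apigw.create_api(
    Name="socket-backend",  # placeholder name
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)

# Routes for the connection lifecycle; each still needs an integration
# (created with create_integration) pointing at the backend.
for key in ("$connect", "$disconnect", "$default"):
    apigw.create_route(ApiId=api["ApiId"], RouteKey=key)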
The answer by Rafael focuses more on using the WebSocket API Gateway, which in my opinion is still relatively new and has some room for improvement. Also, I don't like having Lambda integrations with database access, because without RDS Proxy they exhaust the database connections really fast, and I don't think the HTTP integration adds anything to the whole thing, because you end up performing an HTTP request anyway, just called through the WebSocket API.
One thing I agree with Rafael on is that you need an EC2 instance running Socket.IO, whether it's Node.js or Python (I used Python with Flask in my case).
I managed to connect to my Socket.IO server by using the HTTP API Gateway and setting allow_upgrades=False, so the HTTP protocol won't be upgraded to the ws protocol, because the HTTP API Gateway doesn't support ws. My HTTP API Gateway just forwards Socket.IO requests to the load balancer, and a nice side effect is that you can define access control on each route defined in the HTTP API Gateway.
The Socket.IO server on my EC2 instance is defined like this:
from flask_socketio import SocketIO  # Flask-SocketIO server, upgrades disabled
socketio = SocketIO(async_handlers=True, allow_upgrades=False, cors_allowed_origins='*')
And my client connects to it by simply calling the route defined in the HTTP API Gateway, which has proxy integration enabled.
https://xxxxxxxxx.execute-api.us-west-2.amazonaws.com/socket.io/{proxy}
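For instance, a Python client using the python-socketio client package (assumed installed) could connect through that route, forcing the polling transport to match allow_upgrades=False on the server; the URL below is the placeholder stage URL from above:
import socketio  # python-socketio client package (assumed installed)

sio = socketio.Client()

@sio.event
def connect():
    print("connected to Socket.IO over HTTP long-polling")

# Force the polling transport so the client never attempts a ws upgrade,
# matching allow_upgrades=False on the server.
sio.connect(
    "https://xxxxxxxxx.execute-api.us-west-2.amazonaws.com",
    transports=["polling"],
)
sio.wait()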
Final result - client connected to socket
Before WebSocket technology, if you wanted real-time data in your browser, you needed a wasteful polling strategy. That's why WebSocket technology was introduced. However, it took some time before browsers supported it, and on top of that it wasn't that good at handling reconnects.
Socket.IO gave us early access to a reliable solution by combining multiple protocols and adding several features to improve stability and recover from errors. With new releases, the protocol changed, and more flags and options were added.
That evolution made Socket.IO what it is today, which isn't exactly an "open standard". For that reason, it will probably never be decently supported on AWS.
Some possible solutions:
Having said that, browsers have evolved and most of them support WebSockets now. So you could consider migrating (back) from Socket.IO to plain old WebSockets. Nevertheless, you probably want to add a "heartbeat" that sends ping/pong messages back and forth to detect disconnects (which is one of those things Socket.IO has built in); a sketch follows below.
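As an example of such a heartbeat, here is a hedged sketch using the Python websockets package (which, incidentally, can also do this for you via its ping_interval option); the interval and timeout values are arbitrary:
import asyncio
import websockets  # assumption: the 'websockets' package is installed

async def heartbeat(ws, interval=30, timeout=10):
    # Periodically ping the peer; if no pong arrives in time, close the
    # connection so the application can reconnect cleanly.
    while True:
        await asyncio.sleep(interval)
        try:
            pong_waiter = await ws.ping()
            await asyncio.wait_for(pong_waiter, timeout)
        except (asyncio.TimeoutError, websockets.ConnectionClosed):
            await ws.close()
            break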
Alternatively, if you like GraphQL, you should certainly consider AWS AppSync, which among other things supports GraphQL subscriptions to push notifications to the client. The Apollo client is extremely popular and reliable.

Deploying an HTTP/2 web-server with Kubernetes on AWS

I have a Go server that is currently running with Kubernetes on AWS. The website sits behind Route 53 and an ELB that handles the SSL termination.
Now I want to support HTTP/2 in my web server in order to push resources to clients, and I saw that HTTP/2 requires the web server to use HTTPS. I have a few questions about that.
HTTP/2 requires HTTPS: in my case the HTTPS logic is in the ELB, which manages the SSL termination for me, and my application gets the decrypted data as a plain HTTP request. Do I need to remove the ELB in order to enable HTTP/2 in my web server?
Is there any way to leave the ELB there and still enable HTTP/2 in my web server?
In my local development I use openssl to generate a certificate. If I deploy the web server, I need to get the CA certificate from AWS, store it somewhere in the Kubernetes certificate-manager, and inject it into my web server at initialization. What is the recommended way to do this?
I feel like I'm missing something, so I'll appreciate any help. Thanks.
The new ELB supports HTTP/2 (https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/) but not the Push attribute (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration): “You can't use the server-push feature of HTTP/2”
If you want to use push, you can use the ELB as a layer-four TCP load balancer and enable HTTP/2 on your web server. With HAProxy it is also possible to still offload SSL/TLS in this setup (HTTP/2 behind a reverse proxy), but I'm not sure whether something similar is possible with ELB (probably not). This works because, while HTTP/2 requires HTTPS in all the major browsers, it is not a requirement of the protocol itself, so the load balancer -> server hop can use HTTP/2 without HTTPS (called h2c).
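As a quick way to check what a given HTTPS frontend actually negotiates, here is a small Python sketch probing ALPN; the hostname is a placeholder for your ELB/ALB endpoint:
import socket
import ssl

host = "www.example.com"  # placeholder: your load balancer's hostname

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first

with socket.create_connection((host, 443), timeout=5) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:
        # Prints 'h2' if the frontend negotiates HTTP/2, else 'http/1.1'.
        print("negotiated:", tls.selected_alpn_protocol())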
However, I would say that HTTP/2 push is very complicated to get right; read this excellent post by Jake Archibald of Google on the subject: https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/. It has generally been found to help in a few cases, make no difference in most, and even degrade performance in others. Ultimately it's a bit of a letdown among the HTTP/2 features, though personally I don't think it has been explored enough, so there may be some positives to come out of it yet.
So if you don't want push, is there still a point in upgrading to HTTP/2 on the front end? Yes, in my opinion, as detailed in my answer here: HTTP2 with node.js behind nginx proxy. That answer also shows that there is no real need to have HTTP/2 on the back end from the LB to the web server, meaning you could leave the ELB as an HTTPS-offloading load balancer.
It should be noted that there are some use cases where HTTP/2 is slower:
Under heavy packet loss (i.e. a very bad Internet connection). Here the single TCP connection used by HTTP/2, and its TCP head-of-line blocking, means the connection suffers more than six individual HTTP/1 connections would. QUIC, an even newer protocol than HTTP/2 (so new that it isn't finalized yet, and not really available except on Google servers), addresses this.
For large packets, due to AWS's specific implementation. There is an interesting post on that here: https://medium.com/@ptforsberg/on-the-aws-application-load-balancer-http-2-support-fad4bc67b21a. This is really only an issue for truly large downloads, most likely from APIs, and shouldn't be an issue for most websites (and if it is, then you should optimise your website, because HTTP/2 won't be able to help much anyway!). It could easily be fixed by increasing the HTTP/2 window size setting, but it looks like the ELB does not allow you to set this.
There is no benefit to deploying HTTP2 on an AWS load balancer if your backend is not HTTP2 also. Technically HTTP2 does not require HTTPS, but nobody implements HTTP2 for HTTP. HTTP2 is a protocol optimization (simple viewpoint) that removes round trips in the SSL negotiation, improves pipelining, etc. If the load balancer is communicating with your backend via HTTP, there will not be any improvement. The load balancer will see a small decrease in load due to reduced round trips during HTTPS setup.
I recommend that you configure your backend services to only use HTTPS (redirect clients to HTTPS) and use an SSL certificate. Then configure HTTP2, which is not easy by the way. You can use Let's Encrypt for SSL which works very well. You can also use OpenSSL self-signed certificates (which I don't recommend). You cannot use AWS services to create SSL certificates for your backend services, only for AWS managed services (CloudFront, ALB, etc.).
You can also setup the load balancer with Layer 4 (TCP) listeners. This is what I do when I setup HTTP2 on my backend servers. Now the entire path from client to backend is using HTTP2 without double SSL encryption / decryption layers.
One of the nice features of load balancers is called "SSL offloading". This means that you enable SSL on the load balancer and only enable HTTP on your backend web servers, which works against end-to-end HTTP2. Therefore, think through what you really want to accomplish and then design your services to meet those objectives.
Another point to consider: since you are looking into HTTP2, at the same time remove support in your services for older TLS versions and unsafe encryption and hashing algorithms. Dropping TLS 1.0 should be mandatory today, and I recommend dropping TLS 1.1 as well. Unless you really need to support ancient browsers or custom low-end hardware, TLS 1.2 should be the standard today. Your log files can tell you whether clients are connecting via older protocols.
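As a rough way to verify the result, this Python sketch tries a handshake with each legacy TLS version against a placeholder hostname; note that a modern local OpenSSL may itself refuse TLS 1.0/1.1, so a failure means "not negotiable from here" rather than definitive proof the server rejects it:
import socket
import ssl

host = "www.example.com"  # placeholder: your own endpoint

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only care about the handshake version
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, 443), timeout=5) as tcp:
            with ctx.wrap_socket(tcp, server_hostname=host) as tls:
                print(version.name, "negotiated as", tls.version())
    except (ssl.SSLError, OSError):
        print(version.name, "handshake failed")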

Is any aws service suitable for sending real time updates to browser?

I'm developing a stocks app and have to keep users' browsers updated with pricing changes.
I don't need access to past data; the browser just has to get the current data whenever it changes.
Is it possible to filter a DynamoDB stream and expose an endpoint (behind API Gateway) that could be used with a JavaScript EventSource?
I realize this is not using Server-Sent Events, but AWS just announced serverless WebSockets for API Gateway. Pricing is based on minutes connected and the number of messages sent.
Product Launch Article: https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-api-gateway-launches-support-for-websocket-apis/
Documentation: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html
Pricing: https://aws.amazon.com/api-gateway/pricing/
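To sketch how a price update would reach a connected browser with this approach: the endpoint URL, stage, and connection id below are placeholders, and connection ids are typically captured on the $connect route and stored somewhere such as DynamoDB:
import json
import boto3

# Management API client for the WebSocket API; the endpoint is the API's
# stage URL (placeholder here).
mgmt = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://xxxxxxxxx.execute-api.us-west-2.amazonaws.com/prod",
)

# Push the latest price to one connected client.
mgmt.post_to_connection(
    ConnectionId="abc123=",  # placeholder connection id
    Data=json.dumps({"symbol": "ACME", "price": 42.15}).encode(),
)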
API Gateway is a store-and-forward service. It collects the response from whatever the back-end happens to be (Lambda, an HTTP server, etc.) and then returns it en bloc to the browser -- it doesn't stream the response, so it would not be suited for use as an EventSource.
AWS doesn't currently have a managed service offering that is obviously suited to this use case... you'd need a server (or more than one) on EC2, consuming the data stream and relaying it back to the connected browsers.
Assuming that running EC2 servers is an acceptable option, you then need HTTPS and load balancing. Application Load Balancer supports WebSockets, so it might also support an EventSource. A Classic ELB in TCP (not HTTP) mode should support an EventSource without a problem, though it might not correctly signal to the back-end when the browser connection is lost. Both of those balancers can also offload HTTPS for you. Network Load Balancer would definitely work for balancing an EventSource, but your instances would need to provide the HTTPS themselves, since NLB doesn't offload it for you.
A somewhat unorthodox alternative might actually be AWS IoT, which has built-in websocket support... Not the same as eventsource, of course, but a streaming connection nonetheless... in such an environment, I suppose each browser user could be an addressable "thing."