How to restrict access to EC2 Spring Boot-based REST endpoints

I have a static website hosted on AWS S3 and a Spring Boot-based REST application running on EC2.
How would I restrict access to the EC2 instance? Currently I have opened access to the world on port 8080, but this is not what I would like to have when I migrate to production. If I do not open access to the world, I get a connection timeout error in the browser's console.
Is there some way to allow only the S3 bucket to see the EC2 instance and revoke world access?

But the code isn't running on the "S3 based bucket"; it is running in each of your users' browsers. So you have to allow access to the world, since you don't know ahead of time the IP address of every user you will ever have.
You should look into something like JSON Web Tokens or AWS Cognito, or at least use API keys.
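For example, with Cognito the flow looks roughly like this (a minimal Python sketch; the region, app client ID, endpoint URL, and credentials are placeholders, and the USER_PASSWORD_AUTH flow must be enabled on the app client):

```python
import boto3
import requests

# Authenticate against a Cognito user pool (hypothetical pool/client IDs).
cognito = boto3.client("cognito-idp", region_name="us-east-1")
resp = cognito.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "alice", "PASSWORD": "example-password"},
)
id_token = resp["AuthenticationResult"]["IdToken"]

# Call the Spring Boot REST endpoint with the token. The server side
# (e.g. Spring Security's resource server support) would verify the JWT
# signature against Cognito's published keys before serving the request.
api = requests.get(
    "http://ec2-example.compute.amazonaws.com:8080/api/items",
    headers={"Authorization": f"Bearer {id_token}"},
    timeout=10,
)
print(api.status_code, api.json())
```

The port stays open to the world, but only requests carrying a valid token get past the application layer.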

Related

Programmatically authenticating and accessing service inside AWS EKS cluster from outside when ALB is used

We build a Kubernetes application that is deployed by our users; the users connect to the deployed API server using a client and then use that client to submit jobs.
This question is about programmatically connecting to an application running inside a Kubernetes cluster from outside the cluster.
We have this working with local Kubernetes deployments and with Google Kubernetes Engine (using IAP).
However, some of our users on the Amazon cloud cannot connect to the application.
I do not have much experience with AWS. I'm used to token-based and OAuth-like auth methods, where authentication happens outside of the library: the user is redirected to some page where they log into a service, and the client library only receives a token without ever seeing the password.
One of our users has implemented an auth solution that takes a username and password, uses Selenium to emulate the login process, and obtains a cookie which is then used for sending requests. https://github.com/kubeflow/pipelines/pull/4182
Here is a quote from the PR.
Currently, kfp client can not be used outside AWS EKS cluster. Application load balancer manages outside traffic and require authentication before traffic coming into mesh. This PR automates ALB authentication and get session cookie to authenticate KFP python client to Kubeflow cluster.
This unblocks user to submit pipeline/run outside kubeflow cluster and user can integrate with their CI/CD solutions much easier.
Cognito or OIDC behind ALB both can leverage this solution.
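To make the workaround concrete, here is roughly how the harvested session cookie ends up being used (a sketch: the host and cookie value are placeholders, and I'm assuming a kfp client version that accepts a cookies argument, as in the PR above):

```python
import kfp

# Sketch only: the ALB's OIDC/Cognito auth flow sets a session cookie
# (conventionally named AWSELBAuthSessionCookie-0). The Selenium-based
# workaround harvests it after login; here we assume it is already known.
session_cookie = "AWSELBAuthSessionCookie-0=example-opaque-value"

# The kfp client forwards the cookie with every API request, so the ALB
# treats the programmatic client as an authenticated browser session.
client = kfp.Client(
    host="https://kubeflow.example.com/pipeline",  # placeholder endpoint
    cookies=session_cookie,
)
print(client.list_experiments())
```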
Is there a better way to authenticate with AWS EKS ALB?
I've searched the AWS documentation for programmatic authentication methods but did not find what I wanted (the docs mostly focus on server-side auth setup). In my last search I found the following article, but I'm not 100% sure it covers what our AWS users want.

AWS restrict access to Limited Devices

I have hosted a web application (Node.js + AngularJS) on the AWS ECS service. My requirement is to restrict access to this application to only certain devices, not every device in my organisation.
I am not sure we can restrict devices using each machine's private IP address, considering the addresses are dynamic in nature.
I cannot use client-certificate authentication, based on this AWS article.
Can somebody suggest any other way, or redirect me in the right direction?

HashiCorp Vault - Setup / Architecture in Production

I'm getting ready to set up HashiCorp Vault with my web application, and while the examples HashiCorp provides make sense, I'm a little unclear on what the intended production setup should be.
In my case, I have:
a small handful of AWS EC2 instances serving my web application
a couple EC2 instances serving Jenkins for continuous deployment
and I need:
My configuration software (Ansible) and Jenkins to be able to read secrets during deployment
to allow employees in the company to read secrets as needed and, potentially, generate temporary ones for certain types of access.
I'll probably be using S3 as a storage backend for Vault.
The types of questions I have are:
Should Vault be running on all my EC2 instances, listening at 127.0.0.1:8200?
Or do I create an instance (maybe two, for availability) that just runs Vault, and have other instances/services connect to those as needed for secret access?
If I needed employees to be able to access secrets from their local machines, how does that work? Do they set up Vault locally against the S3 storage, or should they hit the REST API of the remote servers from step 2 to access their secrets?
And to be clear: for any machine running Vault, if it's restarted, Vault would need to be unsealed again, which seems to be a manual process involving some number of key holders?
Vault runs in a client-server architecture, so you should have a dedicated cluster of Vault servers (usually three is suitable for small-to-medium installations) running in high-availability (HA) mode.
The Vault servers should probably bind to the internal private IP, not 127.0.0.1, since on the loopback address they won't be accessible within your VPC. You definitely do not want to bind to 0.0.0.0, since that could make Vault publicly accessible if your instance has a public IP.
You'll want to bind to the address that is advertised on the certificate, whether that's an IP or a DNS name. You should only communicate with Vault over TLS in a production-grade infrastructure.
Any and all requests go through those Vault servers. If other users need to communicate with Vault, they should connect to the VPC via a VPN or bastion host and issue requests from there.
When a machine that is running Vault is restarted, Vault does need to be unsealed. This is why you should run Vault in HA mode, so another server can accept requests. You can set up monitoring and alerting to notify you when a server needs to be unsealed (Vault's health endpoint returns a special status code, as sketched below).
You can also read the production hardening guide for more tips.
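As an illustration of such a check, here is a minimal Python sketch against Vault's documented /v1/sys/health endpoint (the Vault address is a placeholder):

```python
import requests

VAULT_ADDR = "https://vault.internal.example.com:8200"  # placeholder address

# /v1/sys/health encodes server state in the HTTP status code:
# 200 = initialized, unsealed, active; 429 = unsealed standby;
# 501 = not initialized; 503 = sealed.
resp = requests.get(f"{VAULT_ADDR}/v1/sys/health", timeout=5)

if resp.status_code == 503:
    # Hook this into your alerting so an operator can unseal the node.
    print("Vault node is sealed and needs to be unsealed!")
elif resp.status_code in (200, 429):
    print("Vault node is healthy; sealed =", resp.json().get("sealed"))
else:
    print("Unexpected status:", resp.status_code)
```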
Specifically for points 3 & 4:
They would talk to the Vault API, which is running on one or more machines in your cluster (if you do run it in HA mode with multiple machines, only one node will be active at any time). You would provision them with some kind of authentication, using one of the available authentication backends, such as LDAP.
Yes, by default and in its recommended configuration, if any of the Vault nodes in your cluster gets restarted or stopped, you will need to unseal it with however many keys are required, depending on how many key shares were generated when you stood up Vault.
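As a sketch of that employee workflow (assuming the hvac Python library and an enabled LDAP auth backend; the address, credentials, and secret path are placeholders):

```python
import hvac

# Connect to the Vault cluster over TLS (placeholder address).
client = hvac.Client(url="https://vault.internal.example.com:8200")

# Log in via the LDAP auth backend; Vault exchanges the LDAP
# credentials for a Vault token scoped by the user's policies.
client.auth.ldap.login(username="jdoe", password="example-password")

# Read a secret from a KV v2 mount (placeholder path).
secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
print(secret["data"]["data"])
```

The employee's machine never touches the S3 storage backend directly; everything goes through the Vault servers' API.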

We are not using ADFS when we log onto AWS servers

I am going over the IAM topic and have understood Active Directory Federation Services (ADFS). We just started on a project: we are going to host on AWS a vendor product that we currently use on premises. I RDP (remote) into AWS Windows 2012 servers from my office network. When I log onto the AWS Windows 2012 servers, I see my credentials already there. I am pretty sure we are not using ADFS to authenticate users, so what else could we be using when we RDP onto the AWS servers? I can also see my on-premises file servers when I'm logged onto the AWS servers. Is it possible that when our cloud platform engineers set up the AWS servers, they configured them in such a way that we can see our on-prem servers?
I'm assuming you used your domain credentials to RDP to the servers in AWS. The AWS servers would need to be joined to the domain, and there would need to be a route from the VPC in AWS back to your on-prem infrastructure, in order for either of those things to work; so it sounds like this was configured before you logged onto them.

How to make Web Services public

I created an Android application that requires the use of a web service. I want the app to be usable everywhere, therefore I need my web services to be public, with an external IP, so I can access them. What is the best way to do this?
I have an Amazon Web Services account; I don't know whether creating an instance and running the web services there would be the best solution.
My big problem with an Amazon instance is that it takes a while for the result of the web service call to show up in the app.
Any ideas on how to make my web service public?
It appears that your requirements are:
Expose a public API endpoint for use by your Android application
Run some code when the API is called
There are two ways you could expose an API:
Use Amazon API Gateway, which can publish, maintain, monitor, and secure APIs. It takes care of security and throttling. A DNS name is provided, which should be used for API calls. When a request is received, API Gateway can pass it to a web server or trigger an AWS Lambda function to execute code without requiring a server (see the Lambda sketch after this list).
Or, run an Amazon EC2 instance with your application. Assign an Elastic IP Address to the instance, which is a static IP address. Create an A record in Amazon Route 53 (or your own DNS server) that points a DNS name to that IP address.
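To make the API Gateway option concrete, here is a minimal sketch of a Lambda handler behind API Gateway's Lambda proxy integration (the greeting logic is a placeholder; the statusCode/headers/body response shape is the standard proxy-integration contract):

```python
import json

def lambda_handler(event, context):
    """Handle a request forwarded by API Gateway (proxy integration).

    API Gateway passes the HTTP method, path, headers, and query string
    in `event` and expects a dict with statusCode/headers/body back.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

With this option there is no instance to manage, which also avoids the cold-instance latency you described; note that Lambda itself can add a small cold-start delay on the first call.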