Require authorization to access an EC2 port - amazon-web-services

Not sure what the right terms are to start this question, but basically I have a downloaded UI tool that runs on 0.0.0.0:5000 on my AWS EC2 instance, and the instance has a public IP address associated with it. So right now everyone in the world can access this tool by going to {ec2_public_ip}:5000.
I want to run some kind of script, or add security group inbound rules, that will require authorization before letting someone view the page. The application running on port 5000 is a downloaded tool, not my own code, so it wouldn't be possible to add authentication to the tool itself (it's KafkaMagic, FYI).
The one security measure I was able to put in place so far was to only allow TCP connections to port 5000 from specific IPs, which is a good start but not enough, as there is no guarantee that someone on that IP is authorized to view the tool. Is it possible to require an IAM role to access the IP? I do have a separate API with a login endpoint that could be useful if it were possible to run a script before forwarding the request; is that a possible/viable solution? Not sure what best practice is in this case; there might be a third option I have not considered.
ADD-ON EDIT
Additionally, I am using EC2 Instance Connect, and if it is possible to require an active SSH connection before accessing the EC2 instance's IP, that would be a good solution as well.
EDIT FOLLOWING INITIAL DISCUSSION
Another approach that would work for me is a small app running on a different port that could leverage our existing UI to log a user in. If a user authenticated through this app, would it be possible to then display the UI from port 5000 to them? In this case KafkaMagic would be on a private IP, and there would be a different IP that the user would go through before seeing the tool.

In short, the answer is no. If you want authorization (I think you mean authentication) to access an application running on a server, you need tools that run on the server. If your tool offers such a capability, use it. It looks like Kafka Magic does: https://www.kafkamagic.com/faq/#how-to-authenticate-kafka-client-by-consumer-group-id
But you can't use external tools, like AWS, to perform such authentication. A security group is like a firewall: it either allows or blocks access to the port.

You can easily create a script that uses the AWS SDK, or even just executes the AWS CLI, to view/add/remove an IP address in a security group. How you execute that script depends on your audience and what language you use.
For a small number of trusted users, you could issue each of them an IAM user and API key with a policy that allows them to manage a single dynamic security group. Then provide a script they can run (or a shortcut to click) that gets their current gateway IP and adds it to the security group.
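For illustration, here is a minimal sketch of that script in Python with boto3; the security group ID, region, port, and the IP-lookup endpoint are assumptions for the example, not something prescribed by this answer.

```python
# Sketch: add the caller's current public IP to a dynamic security group.
# SG_ID, PORT, and the region are placeholders; adjust to your setup.
import urllib.request
import boto3

SG_ID = "sg-0123456789abcdef0"  # hypothetical dynamic security group
PORT = 5000

def current_ip() -> str:
    # checkip.amazonaws.com returns the caller's public IP as plain text
    return urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

def authorize(ip: str) -> None:
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": PORT,
            "ToPort": PORT,
            "IpRanges": [{"CidrIp": f"{ip}/32", "Description": "dynamic user access"}],
        }],
    )

if __name__ == "__main__":
    authorize(current_ip())
```

A matching revoke_security_group_ingress call with the same IpPermissions structure removes the rule again.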
If you want to allow users in via a website, a simple script behind some existing authentication is also possible with the SDK/CLI approach (depending on the server-side scripting available).
If users have SSH access, you could authorize their IP by calling the script/CLI from .bashrc or some other startup script.
In any case, the IAM policy that grants permission to modify the SG should be as restrictive as possible (basically, don't use any *'s in the policy). You can add additional conditions, like the source IP/range (i.e. in your VPC) or that MFA must be active for the user, to make this more secure (this can be handled in either case via the script). If you're running on EC2, I'd suggest looking at IAM instance roles as an easy way to give your server access to credentials for your script (but you can create a user and deploy the key/secret to the server and manage it manually if you want).
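As a rough illustration of such a policy (a sketch, not a tested, definitive document), here is one possible shape attached inline to a user with boto3; the account ID, region, SG ID, CIDR range, and user name are all placeholders:

```python
# Sketch: a restrictive inline policy for managing one dynamic SG,
# with source-IP and MFA conditions as suggested above.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
        ],
        # scope to the single dynamic SG rather than "*"
        "Resource": "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0",
        "Condition": {
            "Bool": {"aws:MultiFactorAuthPresent": "true"},
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},  # placeholder range
        },
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="dynamic-sg-user",  # hypothetical IAM user
    PolicyName="manage-dynamic-sg",
    PolicyDocument=json.dumps(policy),
)
```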
I would also suggest creating a dedicated security group for dynamically managed access, alongside the existing SGs required for internal operation, for safety. It would also be a good idea to implement a Lambda function on a schedule to flush the dynamic SG (even if you script de-authorizing an IP, it might not always happen, so it's good to clean up safely/automatically).
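A minimal sketch of that clean-up Lambda, assuming boto3 and an EventBridge schedule as the trigger; the SG ID is a placeholder:

```python
# Sketch: scheduled Lambda that revokes every ingress rule
# in the dynamic security group.
import boto3

SG_ID = "sg-0123456789abcdef0"  # hypothetical dynamic security group

def handler(event, context):
    ec2 = boto3.client("ec2")
    group = ec2.describe_security_groups(GroupIds=[SG_ID])["SecurityGroups"][0]
    if group["IpPermissions"]:
        # revoke accepts the same structure describe returns
        ec2.revoke_security_group_ingress(
            GroupId=SG_ID,
            IpPermissions=group["IpPermissions"],
        )
    return {"revoked_rules": len(group["IpPermissions"])}
```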

Related

Cut off a Cloud Run service - safety reasons

Let's assume I run a Google Cloud Run service.
Let's also assume someone really wants to harm me and finds out all the API routes, or is able to send a lot of POST requests by spamming the site.
There is an email notification, which will pop up at certain limits you set up beforehand.
Is there also a way to automatically cut off the Cloud Run service, or set it temporarily offline? I couldn't find any good resource or solution for this.
There are several solutions to take a Cloud Run service out of traffic, in addition to the authentication solution proposed by Dondi:
• Delete the Cloud Run service. It might seem like overkill, but because the service is stateless, you will lose nothing (except the revision history).
• If you have your Cloud Run service behind a Load Balancer:
  • You can remove the serverless NEG that routes the traffic to it.
  • You can add a Cloud Armor policy that filters on the originator IP to exclude it from the traffic.
• You can set the ingress to internal, or to internal and cloud load balancing.
• You can deploy a dummy revision (a hello-world container, for example) and route 100% of the traffic to it (traffic splitting feature).
You can't really "turn off" a Cloud Run service, as it's fully managed by Google. A Cloud Run instance automatically scales down to zero if there are no requests, but the service will continue serving traffic when requests arrive.
To emulate what you want to do, make sure that your service requires authentication, then revoke access from the offending user (or all users). As mentioned in the docs:
Cloud Run (fully managed) does not offer a direct way to make a service stop serving traffic, but you can achieve a similar result by revoking the permission to invoke the service to identities that are invoking the service. Notably, if your service is "public", remove allUsers from the Cloud Run Invoker role (roles/run.invoker).
Update: Access to a resource is managed through an IAM policy. In order to control access programmatically, you have to get the IAM policy first, then revoke the role from a user or a service account. Here's the documentation that gives an overview.
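As a hedged sketch of that get-then-revoke flow, assuming the Cloud Run Admin API v1 via google-api-python-client and application-default credentials; the project, region, and service names are placeholders:

```python
# Sketch: remove allUsers from roles/run.invoker on a Cloud Run service.
from googleapiclient import discovery

RESOURCE = "projects/my-project/locations/us-central1/services/my-service"

# Depending on your setup, you may need to point the client at the
# regional endpoint via client_options={"api_endpoint": ...}.
run = discovery.build("run", "v1")
services = run.projects().locations().services()

# Fetch the current IAM policy, strip allUsers from the invoker role,
# then write the policy back.
policy = services.getIamPolicy(resource=RESOURCE).execute()
for binding in policy.get("bindings", []):
    if binding.get("role") == "roles/run.invoker":
        binding["members"] = [m for m in binding.get("members", []) if m != "allUsers"]
services.setIamPolicy(resource=RESOURCE, body={"policy": policy}).execute()
```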

How to protect source code in AWS from a client?

I am developing some projects for a client. I put all the code on an AWS EC2 instance, and I don't want to give the client access to the code.
The client will have the AWS account ID and password.
Can I protect access to the source code in AWS such that the client can do and see other things, but not the code?
You could provide them access to the AWS console but not hand them programmatic access/SSH access to the instances.
If they want SSH access and your programs require the source code, I don't see any possibility of hiding the code.
You could, however, state in your contract that your code is your property and is not allowed to be reused, etc.
Reference
Securing Folder on EC2 Amazon Marketplace AMI
Edit:
I created a user with console access but without any rights. That user cannot see my running instances, and hence cannot copy them. You could, however, grant access to certain products such as CloudWatch so they can monitor the logs.
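A hedged boto3 sketch of that setup; the user name and password are placeholders. With no policies attached the user sees nothing, and the managed CloudWatchReadOnlyAccess policy adds log monitoring:

```python
# Sketch: console-only IAM user with no rights, plus optional
# read-only CloudWatch access for monitoring logs.
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="client-user")  # hypothetical user name
iam.create_login_profile(
    UserName="client-user",
    Password="ChangeMe-123!",       # placeholder; force a reset on first login
    PasswordResetRequired=True,
)
# Optional: let them monitor logs without seeing EC2 instances or code.
iam.attach_user_policy(
    UserName="client-user",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)
```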

In AWS EC2, is there a way to allow access only to a designated device?

I am new to AWS EC2. I want to set up a website only for my family members.
It will contain some content that is not necessarily private, but that would be more appropriate for only family members to access.
IP address discrimination wouldn't work here, as we may be on the go and using other Wi-Fi networks.
I'm considering the MAC address as the screening basis.
Is such an access restriction possible in EC2? Thanks.
Restricting using MAC addresses won't work: the devices will reach EC2 over a public network, and the MAC address changes at every hop. I assume you would be interested in setting up a remote VPN/L2TP VPN? EC2 can be used as a VPN server, and access can be allowed only from certain clients. If not, try to set up a login-based page and create accounts for your family members.
Here is a free, open-source tool to achieve it:
https://www.digitalocean.com/community/tutorials/how-to-sync-and-share-your-files-with-seafile-on-ubuntu-18-04
Cognito is designed for such things; you can manage user accounts there. You can add an Application Load Balancer in front of your EC2 instance, which will forward to Cognito authentication, but this is a bit of an expensive solution for "family usage".
If there's no very sensitive data on this website, you can use just BasicAuth, which will prompt for a username and password on site entry, or you can add a standard login page to your website.
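As a toy illustration of the BasicAuth idea (normally you'd configure this in your web server rather than hand-roll it), here is a minimal Python sketch with placeholder credentials:

```python
# Sketch: stdlib HTTP server that requires Basic authentication.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

USERNAME, PASSWORD = "family", "s3cret"  # placeholder credentials
EXPECTED = "Basic " + base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Challenge the browser until it sends matching credentials.
        if self.headers.get("Authorization") != EXPECTED:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="family"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Family page</h1>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AuthHandler).serve_forever()
```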
Last but not least is Lambda with API Gateway (the free tier allows a LOT of requests at no cost); this is a more programmatic solution, but it's up to you which one to choose.

Site to Site connection between SonicWall and AWS - IAM Policy

I'm trying to set up a Site-to-Site connection between our on-premises server and our cloud infrastructure. On our premises we have a SonicWall firewall installed, and since SonicOS 6.5.1.0 it's easy to enter an AWS access key and AWS secret key and let the software configure everything via the SDK.
The problem is that the tutorial on how to configure the firewall (p. 8) says:
The security policy used, either for a group to which the user belongs or attached to the user directly, must include the following permissions:
• AmazonEC2FullAccess – For AWS Objects and AWS VPN
• CloudWatchLogsFullAccess – For AWS Logs
Since it's not ideal to give anyone full access to Amazon EC2, do you know which permissions SonicWall actually needs, so I can disable everything else and follow the principle of least privilege?
Without looking into the code for SonicWall itself, it is not going to be easy to know exactly which API calls it's going to make to EC2. If you are prepared to at least temporarily grant full EC2 access, you could use AWS CloudTrail to monitor exactly which API calls are being made by the IAM user associated with your on-premises server, and then update your specific policy to match those calls.
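A small boto3 sketch of that CloudTrail approach; the IAM user name is a placeholder, and note that events can take several minutes to appear in CloudTrail:

```python
# Sketch: list the API calls recently made by the firewall's IAM user,
# as input for building a least-privilege policy.
import boto3

cloudtrail = boto3.client("cloudtrail")

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "sonicwall-user"}]
)

seen = set()
for page in pages:
    for event in page["Events"]:
        seen.add((event.get("EventSource"), event["EventName"]))

for source, name in sorted(seen):
    print(source, name)
```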
Alternatively, start with the full access IAM policy template and go through and deny any calls you think are completely unrelated to SonicWall's functionality.
If you trust SonicWall, then probably the easiest thing to do is to just allow the full EC2 access it claims is required (or start there and gradually remove permissions until something breaks!).

Controlling EC2 and RDS access for third party

I'm setting up an EC2 instance and an Amazon RDS database to host a .NET website. I want my third-party webmaster to handle its setup, but once he has completed setup and the website is running, I want to remove his access to EC2 and RDS completely.
All I want to give him is RDP access to the EC2 instance with root access, in case he needs to install extra software, plus the ability to create and edit tables within an SQL database in RDS. He does not have any role in managing or modifying the EC2/RDS instances themselves.
I've tried allotting IAM access with groups and such, but I can't figure out how to retain superuser access myself while being able to remove his access once he is done setting up the web server and SQL database. How do I give him temporary, revocable access while maintaining superuser access that will not be affected even if I remove him from IAM?
Amazon IAM won't help you with what you want to do.
IAM is used when you want to restrict and/or allow access to the upper-level management of the resources through the AWS Management Console and/or the AWS API.
However, what you want to do is control access to the internals of your resources (the EC2 instance(s) and the RDS instance). For these, you need to use each resource's own internal security controls:
For your RDS instance, create a non-admin user with just enough permissions to accomplish what they need to do. For example, if your RDS instance is MySQL, give them INSERT, SELECT, UPDATE, DELETE, CREATE TABLE, etc. permissions; see the sketch after these notes. Do not give them the ability to create/modify users or anything administrative like that. Best practice is to give them as few permissions as possible and add permissions (if you think it's OK) as they ask for them.
For your EC2 instance(s), do not give them root access. Create a non-root user specifically for your webmaster. Give that user "just enough" permissions to install the website. Do not allow them to use yum or apt. Instead, if they need it, they should tell you and you can do it as root.
In both cases, once your webmaster is done, delete their users and close the security group(s) to them.
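Here is a hedged sketch of the RDS part, assuming a MySQL instance and the PyMySQL package; the endpoint, database name, user names, and passwords are all placeholders. Run it as the RDS master user, then hand only the 'webmaster' credentials to the third party:

```python
# Sketch: create a least-privilege MySQL user for the webmaster.
import pymysql

conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="master-password",
)
try:
    with conn.cursor() as cur:
        cur.execute("CREATE USER 'webmaster'@'%' IDENTIFIED BY 'temporary-password'")
        # data and table permissions only; no GRANT OPTION, no user management
        cur.execute(
            "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER "
            "ON appdb.* TO 'webmaster'@'%'"
        )
    conn.commit()
finally:
    conn.close()
# When the webmaster is done: DROP USER 'webmaster'@'%'
```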
Never give root/admin access to a third party. There are many reasons, but the primary ones are these:
With root access, your webmaster could create other users and/or back doors that allow them access even after you revoke their access. Don't give them the chance to do that.
Since you are responsible for these resources, you should be aware of everything that was done to them: all users that get created, all software that's installed, etc.