I want to require connections to my RDS instance to use TLS/SSL, and to authenticate using a client certificate attested to by a CA I control. I understand I can do the former by modifying my instance's parameter group and setting rds.force_ssl=1. As for the latter, I believe I need to update the CA cert used by my database. I see that there is a parameter, ssl_ca_file, with the value /rdsdbdata/rds-metadata/ca-cert.pem. However, I don't understand how to access that file or modify the parameter.
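For what it's worth, here is roughly how I'm making that parameter group change with boto3 (the group name below is a placeholder for mine):

import boto3

rds = boto3.client("rds")

# Set rds.force_ssl=1 on the instance's parameter group
# ("my-param-group" is a placeholder name).
rds.modify_db_parameter_group(
    DBParameterGroupName="my-param-group",
    Parameters=[
        {
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",
            # rds.force_ssl is a static parameter, so it only takes effect after a reboot.
            "ApplyMethod": "pending-reboot",
        }
    ],
)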
Unfortunately I haven't found anything in the AWS docs on this topic. Has anyone successfully done something like this?
RDS is a managed service. You can't modify the ca-cert.pem file.
As for "and to authenticate using a client certificate attested to by a CA I control": that is not something RDS supports at this time.
Related
Not sure what the right terms are to frame this question, but basically I have a downloaded UI tool that runs on 0.0.0.0:5000 on my AWS EC2 instance, and the instance has a public IP address associated with it. So right now everyone in the world can access this tool by going to {ec2_public_ip}:5000.
I want to run some kind of script or add security group inbound rules that will require authorization before letting someone view the page. The application running on port 5000 is a downloaded tool, not my own code, so it wouldn't be possible to add authentication to the tool itself (it's KafkaMagic, FYI).
The one security measure I've managed so far is to allow only specific IPs to make TCP connections to port 5000, which is a good start but not enough, as there is no guarantee that someone at that IP is authorized to view the tool. Is it possible to require an IAM role to access the IP? I do have a separate API with a login endpoint that could be useful if it were possible to run a script before forwarding the request; is that a possible/viable solution? I'm not sure what best practice is in this case; there might be a third option I have not considered.
ADD-ON EDIT
Additionally, I am using EC2 Instance Connect, and if it is possible to require an active SSH connection before accessing the EC2 instance's IP, that would be a good solution as well.
EDIT FOLLOWING INITIAL DISCUSSION
Another approach that would work for me is a small app running on a different port that could leverage our existing UI to log a user in. If a user authenticated through this app, would it be possible to then display the UI from port 5000 to them? In this case KafkaMagic would be on a private IP, and the user would go through a different IP before seeing the tool.
In short, the answer is no. If you want authorization (I think you mean authentication) to access an application running on the server, you need tools that run on the server. If your tool offers such a capability, use it. It looks like Kafka Magic does: https://www.kafkamagic.com/faq/#how-to-authenticate-kafka-client-by-consumer-group-id
But you can't use external tools, like AWS, to perform such authentication. A security group is like a firewall: it either allows or blocks access to the port.
You can easily create a script that uses the AWS SDK, or even just executes the AWS CLI, to view/add/remove an IP address in a security group. How you execute that script depends on your audience and what language you use.
For a small number of trusted users, you could issue them an IAM user and API key with a policy that allows them to manage a single dynamic security group, then provide a script they can run (or a shortcut to click) that gets their current gateway IP and adds/removes it from the security group.
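A minimal sketch of that script with boto3 (the security group ID and the port are placeholders):

import boto3
import urllib.request

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the dedicated dynamic security group

def current_ip():
    # Ask a public endpoint for the caller's gateway IP.
    return urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

def authorize(ip):
    # Open port 5000 to this IP only.
    ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 5000, "ToPort": 5000,
            "IpRanges": [{"CidrIp": f"{ip}/32", "Description": "dynamic access"}],
        }],
    )

def revoke(ip):
    # Remove the matching rule again when done.
    ec2.revoke_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 5000, "ToPort": 5000,
            "IpRanges": [{"CidrIp": f"{ip}/32"}],
        }],
    )

authorize(current_ip())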
If you want to allow users via a website, a simple script behind some existing authentication is also possible with the SDK/CLI approach (depending on available server-side scripting).
If users have SSH access, you could authorize the IP by calling the script/CLI from .bashrc or some other startup script.
In any case, the IAM policy that grants permissions to modify the SG should be as restrictive as possible (basically, don't use any *'s in the policy). You can add additional conditions, like the source IP/range (i.e. in your VPC) or that MFA must be active for the user, to make this more secure (this can be handled in either case via the script). If you're running on EC2, I'd suggest looking at IAM instance roles as an easy way to give your server access to credentials for your script (but you can create a user and deploy the key/secret to the server and manage it manually if you want).
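As a sketch, such a policy might look like this (account ID, region, and SG ID are placeholders), created here via boto3:

import boto3, json

iam = boto3.client("iam")

# Sketch only: scope the two ingress actions to the one dynamic SG and require MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
        ],
        "Resource": "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

iam.create_policy(
    PolicyName="dynamic-sg-access",
    PolicyDocument=json.dumps(policy),
)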
I would also suggest creating a dedicated security group for dynamically managed access, alongside the existing SGs required for internal operation, for safety. It would be a good idea to implement a Lambda function on a schedule to flush the dynamic SG (even if you script de-authorising an IP, it might not happen, so it's good to clean up safely/automatically).
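The flush handler for that Lambda could be as simple as this sketch (run on an EventBridge schedule; the SG ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the dynamic security group

def handler(event, context):
    # Remove every ingress rule currently on the dynamic SG.
    sg = ec2.describe_security_groups(GroupIds=[SG_ID])["SecurityGroups"][0]
    if sg["IpPermissions"]:
        ec2.revoke_security_group_ingress(GroupId=SG_ID, IpPermissions=sg["IpPermissions"])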
I want to be able to authenticate/authorize clients to produce/consume messages on certain topics. They would be part of our VPN (incl. AWS). As I understand the available documentation, the only option to do this is to issue client certificates and set up ACLs based on the clients' DNs? Unfortunately, I was not able to use my private CA (that I've created on my Linux laptop) to create client certs. So the following questions arise:
Is it correct that I need to set up an AWS-hosted CA (ACM PCA)? That would result in almost twice the setup costs, incl. the minimum broker configs.
Could I proxy the outside world into the MSK cluster via something like "Kafka REST Proxy" from Confluent?
Am I missing something? Is there an easier way built into AWS?
Please enlighten me :)
Thanks in advance,
marcel
Yes, I believe that's correct. To do client authentication over TLS, you need to provide the ARN of your private CA (set up with ACM PCA) at the time the cluster is created, and you have to use the aws command-line tool (aws kafka create-cluster ...) to create the cluster. The UI (last time I looked) didn't have anywhere to specify that ARN.
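For reference, the equivalent call through boto3 might look roughly like this (subnets, security group, and the CA ARN are placeholders; other settings trimmed to a minimum):

import boto3

kafka = boto3.client("kafka")

# Sketch: the ClientAuthentication block is where the private CA ARN goes.
kafka.create_cluster(
    ClusterName="my-cluster",
    KafkaVersion="2.8.1",
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
    ClientAuthentication={
        "Tls": {
            "CertificateAuthorityArnList": [
                "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/placeholder"
            ]
        }
    },
    EncryptionInfo={
        "EncryptionInTransit": {"ClientBroker": "TLS"}
    },
)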
I don't know - we bit the bullet and set up a private CA with ACM.
Nope. We're hoping that eventually AWS will integrate IAM so you can authenticate as an IAM user instead of a client certificate, but that's not where it stands today. Today, it's client certificate only for authentication.
Support for Username and Password Security looks like what you want. I think it's new.
There's AWS Cognito, which you might want to try: https://aws.amazon.com/cognito/
We are a small startup, currently in the prototype phase. We are still in development and are using AWS to host our application and (test) domain. We have hosted our domain on Route 53 and registered it with SES for email services.
I am new to AWS and have used the documentation to understand how to set these things up. Now it appears that our account(s) have been compromised/hacked and someone is misusing them to send malicious emails. I am unsure what the extent of the hack is, and whether the attacker only managed to get access to SES and database credentials. I received an email from the SES team which shows emails have been sent through my domain (not by me), but I never created that email address on my domain.
Additionally, I have noticed that someone is trying to access my database (from China) and the database is always at 100%. The database log says it has blocked an IP (which is based in China).
We are using GitHub to store code, and in our code we had credentials for AWS and SMTP servers, so I think it's possible that someone stole keys from there (we have taken the credentials out of GitHub now).
Can someone help me understand what steps I need to take? I am thinking of shutting down this environment and creating a new one, but I am unsure how to regain control of my domain and shut down all the email addresses created by the spammer on my domain. I am also unclear on the extent of the hack, and whether this will come back.
Can someone please help.
You should never store your credentials in GitHub.
In fact, you should use roles instead of credentials stored directly in the code.
So, step by step you should:
Remove the credentials from GitHub and from your code (done)
Reset your credentials and do not store them in code
Create a role with the policy according to your needs
Assign that role to your resources.
Here you can find more info.
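For example, once a role is attached to the instance, the SDK picks up temporary credentials from the instance metadata automatically, so no keys appear in your code (the bucket name below is just an example):

import boto3

# No access keys anywhere: on an EC2 instance with an attached role,
# boto3 fetches temporary credentials from the instance metadata service.
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="my-example-bucket").get("Contents", []):
    print(obj["Key"])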
I've recently been looking into AWS KMS for storing database passwords and the like. However, I've also seen that SecureString parameters in Parameter Store can be used for this. In either case, I believe I would need to use the AWS CLI (or SDK) to access these services.
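For example, I'd expect reading one of those secure strings to look roughly like this (the parameter name is just an example):

import boto3

ssm = boto3.client("ssm")

# Fetch and decrypt a SecureString parameter ("/prod/db/password" is an example name).
resp = ssm.get_parameter(Name="/prod/db/password", WithDecryption=True)
db_password = resp["Parameter"]["Value"]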
However, in a production environment where there might be multiple servers, how are we supposed to go about getting the AWS CLI installed and authenticated on our instances? It feels like the CLI credentials should also be stored in Parameter Store, creating a bit of a catch-22. As far as I'm aware these shouldn't form part of an AMI, and I don't want them in source control either.
What's the best approach here?
I just don't understand why AWS RDS SQL Server does not allow any admin-level rights. It simply says I do not have permission, even though I logged in using the master username and password.
EXEC sp_addmessage @msgnum = 60000, @severity = 16,
    @msgtext = N'The item named %s already exists in %s.',
    @lang = 'us_english';
GRANT CONTROL SERVER TO [adminUser];
I am finding it pretty hard to figure out how to deal with this.
This is forcing me not to use AWS anymore.
RDS is a managed service provided by AWS. The whole point of RDS is that they manage the server for you. In order to ensure they are able to properly manage it, you do not have full admin rights to the server. They give you as much control as they think you require.
If you need more control, or you feel these restrictions are too limiting, then RDS may not be the service for you.
This is for anyone still looking for the solution: an RDS parameter group will probably help you. You can update some of these configurations from the AWS console with an RDS parameter group. See this article:
https://www.mssqltips.com/sqlservertip/5329/setting-sql-server-configuration-options-with-aws-rds-parameter-groups/