AWS MSK User/Password Authentication/Authorization

I want to be able to authenticate/authorize clients to produce/consume messages on certain topics. The clients would be part of our VPN (incl. AWS). As I understand the available documentation, the only option is to issue client certificates and set up ACLs based on the clients' DNs. Unfortunately, I was not able to use my private CA (created on my Linux laptop) to issue client certs. So the following questions arise:
Is it correct that I need to set up an AWS-hosted CA (ACM PCA)? That would roughly double the setup costs on top of the minimum broker configuration.
Could I proxy the outside world into the MSK cluster via something like Confluent's Kafka REST Proxy?
Am I missing something? Is there an easier way built into AWS?
Please enlighten me :)
Thanks in advance,
Marcel

Yes, I believe that's correct. To do client authentication over TLS, you need to provide the ARN of your private CA (set up with ACM PCA) at the time the cluster is created, and you have to use the AWS command-line tool (aws kafka create-cluster ...) to do it. The console (last time I looked) didn't have anywhere to specify that ARN.
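For reference, that create-cluster call looks roughly like the sketch below; every name, subnet, version, and ARN is a placeholder:

```bash
# Minimal sketch: create an MSK cluster with TLS client authentication
# backed by an ACM PCA private CA. All IDs and ARNs are placeholders.
aws kafka create-cluster \
  --cluster-name "example-cluster" \
  --kafka-version "2.2.1" \
  --number-of-broker-nodes 3 \
  --broker-node-group-info '{
      "InstanceType": "kafka.m5.large",
      "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
      "SecurityGroups": ["sg-dddd"]
    }' \
  --encryption-info '{"EncryptionInTransit": {"ClientBroker": "TLS"}}' \
  --client-authentication '{
      "Tls": {
        "CertificateAuthorityArnList": [
          "arn:aws:acm-pca:eu-west-1:111122223333:certificate-authority/example-ca-id"
        ]
      }
    }'
```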
I don't know - we bit the bullet and set up a private CA with ACM.
Nope. We're hoping that eventually AWS will integrate IAM so you can authenticate as an IAM user instead of with a client certificate, but that's not where it stands today: authentication is client-certificate only.

Support for Username and Password Security (SASL/SCRAM) looks like what you want? I think it's new...
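If that pans out, the wiring is roughly the following sketch; the ARNs and credentials are placeholders, and note that the secret has to live in AWS Secrets Manager, be encrypted with a customer-managed KMS key, and have a name starting with AmazonMSK_:

```bash
# Attach a SCRAM secret (username/password) to an existing MSK cluster.
aws kafka batch-associate-scram-secret \
  --cluster-arn "arn:aws:kafka:eu-west-1:111122223333:cluster/example-cluster/example-uuid" \
  --secret-arn-list "arn:aws:secretsmanager:eu-west-1:111122223333:secret:AmazonMSK_alice"

# Client side: SASL/SCRAM over TLS.
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" password="alice-secret";
EOF
```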

There's AWS Cognito, which you might want to try: https://aws.amazon.com/cognito/

Related

How can I edit the Amazon RDS ssl_ca_file parameter?

I want to require connections to my RDS instance to use TLS/SSL, and to authenticate using a client certificate attested to by a CA I control. I understand I can do the former by modifying my instance's parameter group and setting rds.force_ssl=1. As for the latter, I believe I need to update the CA cert used by my database. I see that there is a parameter, ssl_ca_file with the value /rdsdbdata/rds-metadata/ca-cert.pem. However, I don't understand how to access that file or modify the parameter.
Unfortunately I haven't found anything in the AWS docs on this topic. Has anyone successfully done something like this?
RDS is a managed service, so you can't modify the ca-cert.pem file.
As for authenticating "using a client certificate attested to by a CA I control": that is not something RDS supports at this time.
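The first half (forcing TLS) does work as you describe; a minimal sketch with a placeholder parameter group name:

```bash
# Require TLS for all connections by flipping rds.force_ssl in the
# instance's parameter group.
aws rds modify-db-parameter-group \
  --db-parameter-group-name example-postgres-params \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=immediate"
```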

AWS Grafana connecting to AWS Opensearch `OpenSearch error: Bad Gateway`

We have an AWS Org with AWS Grafana running in the root account, set up with Org access.
We have successfully connected to AWS Prometheus and other data sources across different organization accounts, but we can't get AWS Grafana to connect to an Amazon OpenSearch cluster that is hosted in a VPC.
Under Grafana -> AWS Data Sources -> Amazon OpenSearch Service, the cluster is listed, but all attempts to connect have failed.
We have tried:
Using SigV4 auth
Using Basic auth + With Credentials (even adding VPC connections between accounts and checking that the ports are open)
When we try Save and Test, we always get "Testing..." followed by "OpenSearch error: Bad Gateway" in Grafana.
Has anyone got this working and is able to assist?
Same issue here, except our Grafana is set up in the same account as the OpenSearch cluster.
We also tried configuring the security group on the OpenSearch cluster to accept everything (all ports, all protocols, from anywhere).
I'm wondering if it's a network issue: since the OpenSearch cluster is in a VPC, can Grafana reach it at all? I can't find any documentation on the networking side of managed Grafana.
Hope someone can help.
I've been told it's a known issue.
The workaround is to create a proxy for your OpenSearch cluster and give it internet access so Grafana can connect.
No idea on AWS's timeline to build/fix a proper solution :(
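If you go the proxy route, here is a minimal sketch of an nginx reverse proxy running on an instance inside the VPC; hostnames, cert paths, and the OpenSearch endpoint are all placeholders:

```bash
# Terminate TLS on the proxy and forward to the VPC-only OpenSearch endpoint.
cat > /etc/nginx/conf.d/opensearch-proxy.conf <<'EOF'
server {
  listen 443 ssl;
  server_name opensearch-proxy.example.com;
  ssl_certificate     /etc/ssl/certs/example.pem;
  ssl_certificate_key /etc/ssl/private/example.key;

  location / {
    proxy_pass https://vpc-example-abc123.eu-west-1.es.amazonaws.com;
  }
}
EOF
nginx -s reload
```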
A solution that works well on my side is to fill in the fields as follows (a quick sanity check follows these steps):
HTTP section:
URL: https://search-anything
Access: Server (default)
Auth section:
Check Basic auth
Then, in Basic Auth Details, fill in the master username and password
OpenSearch details section:
Fill in the name of an index
Make sure a timestamp field exists in that index and put its name in Time field name
Choose the right OpenSearch version (1.0.x)
Test
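As a quick sanity check, you can verify the endpoint and master credentials outside Grafana first; the hostname and credentials below are placeholders:

```bash
# The index and timestamp field Grafana needs should show up here.
curl -u 'master-user:master-password' \
  "https://search-example-abc123.eu-west-1.es.amazonaws.com/_cat/indices?v"
```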
I hope this will help you

Confused about filebeat AWS support

We are deploying multi-region (and possibly multi-cloud in the future).
Our Elasticsearch endpoint must therefore be public.
I know I can add an IP-based policy on the AWS Elasticsearch domain to essentially whitelist all the endpoints that should be allowed to write their logs to the AWS ES service.
Looking for a "saner" alternative, I came across:
https://discuss.elastic.co/t/how-to-connect-beats-to-aws-elasticsearch-with-authentication/83465
and specially
https://forums.aws.amazon.com/thread.jspa?threadID=294252
the latter specifically saying:
"Filebeat doesn't support IAM authentication so using it with this AWS Elasticsearch service typically doesn't work."
However, I found this:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html#aws-credentials-options
which describes the filebeat aws module and seems to suggest that it may actually support it.
I couldn't find any official documentation or blog post confirming that I could use filebeat on remote machines to send (authenticated? signed?) logs to a public AWS Elasticsearch endpoint, which would let me keep the policy open without a whitelist (or maybe I need both).
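For what it's worth, the credentials options in that last link belong to filebeat's aws module, which is an input that pulls logs from AWS services (S3/SQS, CloudTrail, ...); as far as I know they don't make the Elasticsearch output sign its requests. A sketch of what that module config actually sets up, with placeholder queue URL and keys:

```bash
# The aws module reads AWS service logs via SQS/S3; its credentials are for
# that input, not for SigV4-signing what filebeat ships to Elasticsearch.
cat > modules.d/aws.yml <<'EOF'
- module: aws
  cloudtrail:
    enabled: true
    var.queue_url: https://sqs.eu-west-1.amazonaws.com/111122223333/example-queue
    var.access_key_id: EXAMPLE_KEY_ID
    var.secret_access_key: EXAMPLE_SECRET
EOF
```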

Connecting to VPC-based Kibana using AWS Cognito

I'm a beginner to AWS and a bit confused regarding the AWS Cognito system.
I have an AWS Elasticsearch Service domain inside a VPC. I'm trying to access the Kibana endpoint using AWS Cognito, but when I navigate to the login screen I see a blank page.
Is what I'm trying to do feasible, or do I need to VPN into the VPC first in order to get to the login screen? If so, how can I grant public users access to Kibana (without the trouble of a VPN)? Would a better solution be to have a reverse proxy pointing to Kibana, coupled with AWS Cognito? Thanks for your help.
Note: I'm using Elasticsearch 6.2
I had the same blank page while doing the same setup, but I don't remember exactly which step solved it.
At this stage, it looks like you have already set your access policy to use the Cognito role; otherwise you wouldn't end up on the Cognito login page (even though it's blank for now).
I would check the identity provider config in the Cognito User Pool's App client settings:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html#es-cognito-auth-identity-providers
It wasn't linked to a lack of access (i.e. no VPN) or a missing UI customization, and it's definitely achievable.
Let me know if you want me to look deeper into it.
EDIT: when I go directly to the domain URL of my Cognito pool (i.e. https://yourdomain.auth.your-region.amazoncognito.com) I still get a blank page. It's only when going to the protected application (Kibana) that the login page is filled in (probably linked to the app client settings above).
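One thing you can check from the CLI is whether the app client that Amazon ES created in your user pool actually has an identity provider enabled; the hosted login page needs at least one. Pool and client IDs below are placeholders:

```bash
# List the identity providers enabled on the Kibana app client; for the plain
# Cognito login form, "COGNITO" should appear in the list.
aws cognito-idp describe-user-pool-client \
  --user-pool-id eu-west-1_EXAMPLE \
  --client-id exampleclientid1234 \
  --query 'UserPoolClient.SupportedIdentityProviders'
```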
I created a wiki page in my GitHub repo because I did this exact same thing (public ESS and Cognito) over the last couple of days. You can get the info HERE, and I hope it helps clear things up!

VPC activation message

I got an email from AWS saying that I have activated a new VPC security group, but I had removed all instances and VPCs from my account. The email reads as below:
Dear Amazon EC2 Customer,
Thank you for activating the Virtual Private Cloud (VPC) service for your Amazon Web Services account. Here are a few useful resources to help you familiarize yourself with VPC:
May I know the reason for this?
Thanks in advance.
Regards,
Eleena Jose
Based on my experience, when you first use an AWS resource, Amazon sends you an introductory email with links to learn more about that service.
This does not mean that you are currently using the service. IMHO it is just an educational email to a) provide you with links to learn more, b) remind you that you have used a service in case you forgot, and c) promote AWS.