Can I authenticate a RabbitMQ connection using AWS IAM roles, and how? - amazon-web-services

I am hosting a RabbitMQ cluster on AWS EC2. Is it possible for my remote application to authenticate a connection to the RabbitMQ server using AWS IAM roles, or in some other way using AWS services? The reason is that my IoT devices will have AWS credentials configured, but I do not want to install SSL certificates individually on each of them. I am using the Python pika library, and my application currently works with plain credentials (username/password) in the dev environment.

No, you cannot authenticate a RabbitMQ connection with IAM roles directly. However, you could store the RabbitMQ credentials in AWS Secrets Manager and grant the IAM credentials on your IoT devices permission to read that secret.
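A minimal sketch of that approach, assuming a secret named `rabbitmq/app-credentials` that holds a JSON payload like `{"username": "...", "password": "..."}` and a broker host of `rabbitmq.example.internal` — the secret name, JSON key names, and host are all placeholders, not anything prescribed by AWS or pika:

```python
import json

def parse_rabbitmq_secret(secret_string):
    """Extract the RabbitMQ username/password pair from a Secrets Manager
    SecretString payload (assumed to be JSON with these two keys)."""
    doc = json.loads(secret_string)
    return doc["username"], doc["password"]

def open_rabbitmq_connection(secret_id="rabbitmq/app-credentials",
                             host="rabbitmq.example.internal"):
    """Fetch credentials from Secrets Manager using the device's AWS
    credentials, then open a pika connection with them. Requires boto3
    and pika, plus IAM permission secretsmanager:GetSecretValue."""
    import boto3  # imported here so the parsing helper stays dependency-free
    import pika
    sm = boto3.client("secretsmanager")
    user, password = parse_rabbitmq_secret(
        sm.get_secret_value(SecretId=secret_id)["SecretString"])
    return pika.BlockingConnection(pika.ConnectionParameters(
        host=host, credentials=pika.PlainCredentials(user, password)))
```

Note that this only moves the username/password out of device configuration and into Secrets Manager behind IAM; the RabbitMQ connection itself is still plain credential auth.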

Related

Expose websocket endpoint for RabbitMQ from AWS broker

I have launched a RabbitMQ instance from the Amazon MQ broker service. I plan to connect to it from my frontend application too, and as such would want a websocket endpoint. I am wondering if AWS exposes a websocket endpoint natively? I came across some documentation with images showing a wss endpoint under the connection details in the AWS console.
I don't see this option under the connection details in my AWS console.
I could always launch a separate instance and have it work as a relay, but was wondering whether this is available from within AWS.

AWS DocumentDB: connecting with Python, and TLS PEM bundle questions

We use AWS Fargate with a Python project. The AWS default setup uses a PEM file when connecting. I know I can turn off TLS.
My coworker says he doesn't want to store credentials in the same repo as the code. What is the recommended storage location of that file?
Why do I need it when the servers are inside a VPC?
Do I need a different PEM file if I create a cluster on AWS GovCloud, or does the bundle include all I need?
Do I need it if I'm using an Amazon Linux 2 instance?
Please find the answers:
You can store the PEM file anywhere (the same repository, or any other location the code can pull from), but it must be accessible to the code when it makes an encrypted connection and performs server validation.
Communication between servers in a VPC is private, but using a server certificate provides an extra layer of security by validating that the connection is in fact being made to an Amazon DocumentDB cluster.
An Amazon DocumentDB cluster in a GovCloud region has a similar, separate bundle for TLS connections.
Even on an Amazon Linux 2 instance, the PEM certificate file needs to be stored on the instance so that the code can refer to it and validate the server when opening a connection.
From a security point of view, it is always best practice to use TLS and authenticate the server with the certificate.
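To make the TLS validation concrete, here is a sketch of building a DocumentDB connection string that points the driver at the Amazon CA bundle. The cluster endpoint, credentials, and bundle path below are placeholders; the `tls` and `tlsCAFile` URI options are standard MongoDB driver options that pymongo understands:

```python
def documentdb_uri(host, user, password, ca_file):
    """Build a MongoDB connection URI that enables TLS and points the
    driver at the Amazon-provided CA bundle for server validation."""
    return (
        f"mongodb://{user}:{password}@{host}:27017/"
        f"?tls=true&tlsCAFile={ca_file}&retryWrites=false"
    )

# Placeholder endpoint/credentials -- substitute your own cluster details.
uri = documentdb_uri(
    "my-cluster.cluster-example.us-east-1.docdb.amazonaws.com",
    "dbuser", "dbpass", "global-bundle.pem")

# Then, with pymongo installed:
#   from pymongo import MongoClient
#   client = MongoClient(uri)
```

The PEM bundle here is a public CA certificate, not a credential, so storing it alongside the code is harmless; it is the username/password that belong in a secrets store.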

Programmatically authenticating and accessing service inside AWS EKS cluster from outside when ALB is used

We build a Kubernetes application that is deployed by our users; our users connect to the deployed API server using a client and then use that client to submit jobs.
This question is about programmatically connecting to an application running inside Kubernetes cluster from outside of the cluster.
We have this working with local Kubernetes deployments and Google Kubernetes Engine (using IAP).
However some of our users on Amazon cloud cannot connect to the application.
I do not have much experience with AWS. I'm used to token-based auth and OAuth-like auth methods where authentication happens outside of a library: the user is redirected to some page where they log into a service and the client library only gets a token without ever seeing the password.
One of our users has implemented an auth solution that takes a username and password and then uses Selenium to emulate the login process and obtain a cookie, which is then used for sending requests. https://github.com/kubeflow/pipelines/pull/4182
Here is a quote from the PR.
Currently, kfp client can not be used outside AWS EKS cluster. Application load balancer manages outside traffic and require authentication before traffic coming into mesh. This PR automates ALB authentication and get session cookie to authenticate KFP python client to Kubeflow cluster.
This unblocks user to submit pipeline/run outside kubeflow cluster and user can integrate with their CI/CD solutions much easier.
Cognito or OIDC behind ALB both can leverage this solution.
Is there a better way to authenticate with AWS EKS ALB?
I've searched the AWS documentation for programmatic authentication methods, but did not find what I wanted (the docs mostly focus on server-side auth setup). In my last search I found an article, but I'm not 100% sure it covers what our AWS users want.
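Whatever flow obtains the ALB session cookie (Selenium as in the PR above, or a manual login), the client side of the approach reduces to attaching that cookie to outgoing requests. A minimal stdlib sketch — the cookie name `AWSELBAuthSessionCookie-0` is the one ALBs conventionally set for authenticated sessions, and the URL and cookie value are placeholders:

```python
import urllib.request

def authed_request(url, session_cookie_value):
    """Build a request that carries the ALB auth session cookie, so the
    load balancer treats the caller as an already-authenticated session."""
    req = urllib.request.Request(url)
    req.add_header("Cookie",
                   f"AWSELBAuthSessionCookie-0={session_cookie_value}")
    return req

req = authed_request("https://kubeflow.example.com/pipeline", "placeholder-value")
# urllib.request.urlopen(req) would then send it through the ALB.
```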

We are not using ADFS when we log onto AWS servers

I am going over the IAM topic and have understood Active Directory Federation Services (ADFS). We just started on a project: we are going to host a vendor product, which we currently use on premises, on AWS. I RDP (remote) into AWS Windows 2012 servers from my office network. When I log onto the AWS Windows 2012 servers, I see my credentials already there. I am pretty sure we are not using ADFS to authenticate users. What else could we be using when we RDP onto AWS servers? I can also see my on-premises file servers when I log onto the AWS servers. Is it possible that when our cloud platform engineers set up the AWS servers, they configured them in such a way that we can see our on-prem servers?
I'm assuming you used your domain credentials to RDP to the servers in AWS. The AWS servers would need to be joined to the domain, and there would need to be a route from the VPC in AWS back to your on-prem infrastructure, for either of those things to work; so it sounds like this was configured before you logged on.

Disadvantages of SMTP setup on an AWS server

I want to configure SMTP on an AWS instance to get email reports of the instance's disk usage.
Are there any disadvantages, or any points we should keep in mind, before setting up SMTP on an AWS server?
I am using mailutils for the SMTP setup, following a guide.
I am using an EC2 instance.
Is there any way to set up email functionality on an AWS EC2 instance?
I would not set up SMTP for this purpose.
Firstly, AWS generally blocks outbound SMTP mail sending. See this discussion on Server Fault:
https://serverfault.com/questions/165854/my-ec2-instances-email-is-being-spam-blocked-by-gmail
Secondly, AWS has its own built-in monitoring and metrics system called CloudWatch. Unfortunately, it does not support disk-space monitoring straight out of the box, but AWS documents how to add it: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/mon-scripts.html
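As an alternative to the Perl scripts in that AWS guide, a short sketch of pushing disk usage to CloudWatch as a custom metric — the namespace and metric name below are arbitrary choices, not AWS-defined names:

```python
import shutil

def disk_used_percent(path="/"):
    """Percentage of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def publish_disk_metric(path="/", namespace="Custom/EC2"):
    """Push the value to CloudWatch as a custom metric. Requires boto3
    and IAM permission cloudwatch:PutMetricData; an alarm on this metric
    can then notify you via SNS email instead of a local SMTP setup."""
    import boto3  # imported here so the measurement helper stays stdlib-only
    boto3.client("cloudwatch").put_metric_data(
        Namespace=namespace,
        MetricData=[{"MetricName": "DiskUsedPercent",
                     "Unit": "Percent",
                     "Value": disk_used_percent(path)}])
```

Pairing the metric with a CloudWatch alarm and an SNS email subscription gives you the disk-usage emails without running any mail software on the instance.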