Confused about filebeat AWS support

We are deploying multi-region (and possibly multi-cloud in the future).
Our Elasticsearch endpoint must thus be public.
I know I can add an IP-based policy on the AWS Elasticsearch domain to essentially whitelist all endpoints that should be allowed to write their logs to the AWS ES service.
Looking for a "saner" alternative, I came across:
https://discuss.elastic.co/t/how-to-connect-beats-to-aws-elasticsearch-with-authentication/83465
and especially
https://forums.aws.amazon.com/thread.jspa?threadID=294252
the latter specifically saying:
"Filebeat doesn't support IAM authentication so using it with this AWS Elasticsearch service typically doesn't work."
However, I found this:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html#aws-credentials-options
which documents the Filebeat aws module and seems to suggest that it actually may support AWS credentials.
I couldn't find any official documentation or blog post confirming that I could use Filebeat on remote machines to send authenticated (signed?) logs to a public AWS Elasticsearch endpoint, which would let me keep the access policy open without maintaining an IP whitelist (or maybe I need both).
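For context, "signed" here means AWS Signature Version 4 (SigV4) requests, which is what an IAM-based access policy on the domain checks; per the AWS forum quote above, Filebeat's plain elasticsearch output does not produce them, which is why a commonly suggested (unofficial) workaround is to run a small signing proxy such as aws-es-proxy next to Filebeat. The sketch below, using boto3/botocore, only illustrates what such a signed request looks like; the endpoint, region, and path are placeholders:

```python
# Minimal sketch: a SigV4-signed request to an AWS Elasticsearch domain.
# The endpoint, region, and path below are placeholders.
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = boto3.Session()  # picks up IAM credentials from the environment
region = "us-east-1"
endpoint = "https://my-domain.us-east-1.es.amazonaws.com"

request = AWSRequest(method="GET", url=endpoint + "/_cluster/health")
# add_auth() attaches the SigV4 Authorization header that an IAM-based
# access policy on the domain validates.
SigV4Auth(session.get_credentials(), "es", region).add_auth(request)

response = requests.get(request.url, headers=dict(request.headers))
print(response.status_code, response.text)
```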

Related

How to audit changes to the AWS account

I wanted to know if there is a way to track alerts or audit anything that happens with the AWS account, like who changed what and why. I did find https://docs.aws.amazon.com/opensearch-service/latest/developerguide/audit-logs.html where they use a command-line call for enabling audit logs on an existing domain:

    aws opensearch update-domain-config --domain-name my-domain --log-publishing-options "AUDIT_LOGS={CloudWatchLogsLogGroupArn=arn:aws:logs:us-east-1:123456789012:log-group:my-log-group,Enabled=true}"

but this is in regard to Amazon OpenSearch Service, which I believe is only free for 12 months if you haven't used it already. There is also AWS Audit Manager. I am aware there are services that can do this but they require a fee, and I wanted to know if there are any free options.
From the AWS documentation:
With AWS CloudTrail, you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including API calls made by using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn CloudTrail logging on and off.
AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.
Basically, AWS CloudTrail keeps a log of API calls (requests to AWS to do/change stuff), while AWS Config tracks how individual configurations have changed over time (for a limited range of resources, such as a Security Group rule being changed).
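If the goal is a free way to answer "who changed what", CloudTrail's event history keeps the last 90 days of management events (as far as I know, at no charge) and can be queried directly. A minimal sketch with boto3; the event name and time window are only examples:

```python
# Sketch: query CloudTrail's 90-day event history for "who changed what".
# The event name and time window below are illustrative.
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        # e.g. who opened up a Security Group rule?
        {"AttributeKey": "EventName", "AttributeValue": "AuthorizeSecurityGroupIngress"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```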

Accessing AWS S3 from within Google GCP

We were doing most of our cloud processing using AWS (and still do). However, we now also have some credits on GCP, and we would like to use them and explore interoperability between the cloud providers.
In particular, I was wondering if it is possible to use AWS S3 from within GCP. I am not talking about migrating the data, but whether there is some API which will allow AWS S3 to work seamlessly from within GCP. We have a lot of data and databases hosted on AWS S3 and would prefer to keep everything there, as AWS still does the bulk of our compute.
I guess one way would be to transfer the AWS keys to the GCP VM and then use the boto3 library to download content from AWS S3 (as sketched below), but I was wondering if GCP, by itself, provides some other tools for this.
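For reference, the boto3 route mentioned above is only a few lines, assuming the AWS credentials have been copied to the GCP VM (environment variables or ~/.aws/credentials); the bucket and key names are placeholders:

```python
# Sketch of the boto3 approach from a GCP VM. Credentials are read from
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or ~/.aws/credentials.
# The bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.download_file("my-bucket", "path/to/object.csv", "/tmp/object.csv")
```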
From an AWS perspective, an application running on GCP appears logically as an on-premises computing environment. This means that you should be able to leverage any AWS services that can be invoked from an on-premises solution. The GCP environment will have Internet connectivity to AWS, which should be a pretty decent link.
In addition, there is a migration service (GCP's Storage Transfer Service, linked below) which will move S3 storage to GCS ... but this is distinct from what you were asking.
See also:
Getting started with Amazon S3
Storage Transfer Service Overview

AWS MSK User/Password Authentication/Authorization

I want to be able to authenticate/authorize clients to produce/consume messages on certain topics. They would be part of our VPN (incl. AWS). As I understand the available documentation, the only option to do this is to issue client certificates and set up ACLs based on the clients' DNs? Unfortunately, I was not able to use my private CA (which I created on my Linux laptop) to create client certs, so the following questions arise:
Is it correct that I need to set up an AWS-hosted CA (ACM PCA)? That would result in almost twice the setup costs, incl. the minimum broker configs.
I could proxy the outer world into the MSK cluster via something like "Kafka REST Proxy" from Confluent - correct?
Am I missing something? Is there an easier way built into AWS?
Please enlighten me :)
Thanks in advance
Marcel
Yes, I believe that's correct. To do client authentication over TLS, you need to provide the ARN of your private CA that's set up with ACM PCA at the time the cluster is created - and you have to use the aws command-line tool (aws kafka create-cluster ...) to create the cluster (a boto3 equivalent is sketched below). The UI (last time I looked) didn't have anywhere to specify that ARN.
I don't know - we bit the bullet and set up a private CA with ACM.
Nope. We're hoping that eventually AWS will integrate IAM so you can authenticate as an IAM user instead of a client certificate, but that's not where it stands today. Today, it's client certificate only for authentication.
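For reference, a boto3 equivalent of that create-cluster call might look like the sketch below; every name, ARN, subnet, and security group is a placeholder, and the broker settings are purely illustrative:

```python
# Sketch: creating an MSK cluster with TLS client authentication backed
# by a private CA in ACM PCA. All identifiers below are placeholders.
import boto3

kafka = boto3.client("kafka")

kafka.create_cluster(
    ClusterName="my-cluster",
    KafkaVersion="2.2.1",
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        "SecurityGroups": ["sg-dddd"],
    },
    EncryptionInfo={
        # TLS in transit is required for TLS client authentication.
        "EncryptionInTransit": {"ClientBroker": "TLS", "InCluster": True}
    },
    ClientAuthentication={
        "Tls": {
            "CertificateAuthorityArnList": [
                "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example"
            ]
        }
    },
)
```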
Support for Username and Password Security looks like what you want? I think it's new.
There's AWS Cognito, which you might want to try: https://aws.amazon.com/cognito/

Site to Site connection between SonicWall and AWS - IAM Policy

I'm trying to set up a Site-to-Site connection between our on-premises server and our cloud infrastructure. On our premises we have a SonicWall firewall installed and, since SonicOS 6.5.1.0, it is easy to enter an AWS Access Key and AWS Secret Key and let the firewall configure everything via the SDK.
The problem is that the tutorial on how to configure the firewall (p. 8) says:
The security policy used, either for a group to which the user belongs or attached to the user directly, must include the following permissions:
• AmazonEC2FullAccess – For AWS Objects and AWS VPN
• CloudWatchLogsFullAccess – For AWS Logs
Since it's not ideal to give anything full access to Amazon EC2, do you know which permissions SonicWall actually needs, so I can disable everything else and follow the principle of least privilege?
Without looking into the code for SonicWall itself, it is not going to be easy to know exactly which API calls it's going to make to EC2. If you are prepared to at least temporarily grant full EC2 access, you could use AWS CloudTrail to monitor exactly which API calls are being made by the IAM user associated with your on-premises server, and then update your specific policy to match those calls.
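A sketch of that monitoring step with boto3; the IAM user name and time window are placeholders. Each EventSource/EventName pair it prints corresponds to an IAM action (e.g. ec2:DescribeVpcs) that you can then allow explicitly:

```python
# Sketch: list the distinct API calls made by the firewall's IAM user,
# as raw material for a least-privilege policy. Names are placeholders.
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "Username", "AttributeValue": "sonicwall-user"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

# De-duplicate into (event source, event name) pairs.
for source, name in sorted({(e["EventSource"], e["EventName"]) for e in events["Events"]}):
    print(source, name)
```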
Alternatively, start with the full access IAM policy template and go through and deny any calls you think are completely unrelated to SonicWall's functionality.
If you trust SonicWall, then probably the easiest thing to do is to just allow the full EC2 access it claims is required (or start there and gradually remove permissions until something breaks!).

What credential is used inside AWS EKS to access AWS services such as SNS?

How do I configure credentials to use an AWS service from inside EKS? I cannot use the AWS SDK for this specific purpose. I have specified a role with the required permissions in the YAML file, but it does not seem to be picking up the role.
Thank you.
Any help is appreciated.
Typically you'd want to apply some level of logic to allow the pods themselves to obtain IAM credentials from STS. AWS does not currently (it's re:Invent now, so you never know) provide a native way to do this. The two community solutions we've implemented are:
kube2IAM: https://github.com/jtblin/kube2iam
kIAM: https://github.com/uswitch/kiam
Both work well in production/large environments in my experience. I prefer kIAM's security model, but both get the job done.
Essentially they work the same basic way ... intercepting (for lack of a better word) communications between the SDK libraries in the container and STS, matching the identity of the pod against an internal role dictionary, and then obtaining STS credentials for that role and handing those credentials back to the container. The SDK isn't inherently aware it's in a container; it's just doing what it does anywhere ... walking its access tree until it sees the need to obtain credentials from STS and receiving them.
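Assuming one of these tools is deployed and the pod has been mapped to a role (the per-tool setup isn't shown here), a quick way to verify which role the SDK actually resolved is to ask STS from inside the pod:

```python
# Sketch: run inside a pod to check which IAM role the SDK picked up.
# With kube2iam/kiam intercepting the metadata endpoint, the ARN should
# name the role mapped to the pod, not the node's instance role.
import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])  # e.g. arn:aws:sts::123456789012:assumed-role/<pod-role>/<session>
```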