What benefits, if any, exist for selecting the host endpoint amazonaws.com versus api.aws for connecting programmatically to an AWS service?
If possible, it seems best to use amazonaws.com when you can leverage the FIPS-enabled endpoint (service-fips.region.amazonaws.com), but that seems to be the only difference I've found.
For example, the service endpoints for Lambda in the us-east-2 region include the following:
lambda.us-east-2.amazonaws.com
lambda-fips.us-east-2.amazonaws.com
lambda.us-east-2.api.aws
To ask this in another way... would you ever use lambda.us-east-2.api.aws, and if so, why?
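For anyone who wants to compare the two hostnames empirically, here is a minimal boto3 sketch that pins a client to each endpoint explicitly. The region, default credentials, and the harmless GetAccountSettings call are just illustrative choices, not anything the question prescribes:

```python
import boto3

# Pin a Lambda client to each hostname and make the same harmless call.
classic = boto3.client(
    "lambda", region_name="us-east-2",
    endpoint_url="https://lambda.us-east-2.amazonaws.com")
dual = boto3.client(
    "lambda", region_name="us-east-2",
    endpoint_url="https://lambda.us-east-2.api.aws")

for client in (classic, dual):
    usage = client.get_account_settings()["AccountUsage"]
    print(client.meta.endpoint_url, usage["FunctionCount"])
```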
I'm using AWS OpenSearch in a private VPC.
I have about 10,000 entries under an index.
For local development I'm running a local OpenSearch container, and I'd like to export all the entries from the OpenSearch service into my local container.
I can get all the entries from the OpenSearch API, but the format of the response is different from the format required by the _bulk operation.
Can someone please tell me how I should do it?
Anna,
There are different strategies you can take to accomplish this, given that your domain is running in a private VPC.
Option 1: Exporting and Importing Snapshots
From the security standpoint, this is the recommended option, as you are moving entire indices out of the service without exposing the data. Please follow the official AWS documentation on how to create custom index snapshots. Once you complete the steps, you will have an index snapshot stored in an Amazon S3 bucket. After this, you can securely download the index snapshot to your local machine, then follow the instructions in the official OpenSearch documentation on how to restore index snapshots.
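For reference, the repository-registration step from the AWS docs has to be a signed request. A minimal sketch using requests and requests-aws4auth, where the host, bucket name, and role ARN are placeholders you would replace with your own:

```python
import boto3
import requests
from requests_aws4auth import AWS4Auth

# Register a manual snapshot repository, then take a snapshot.
# The domain must be reachable (e.g. run this from inside the VPC).
host = "https://vpc-mydomain-xxxx.us-east-1.es.amazonaws.com"  # placeholder
region = "us-east-1"
creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, "es",
                   session_token=creds.token)

payload = {
    "type": "s3",
    "settings": {
        "bucket": "my-snapshot-bucket",  # placeholder
        "region": region,
        "role_arn": "arn:aws:iam::123456789012:role/SnapshotRole",  # placeholder
    },
}
r = requests.put(f"{host}/_snapshot/my-repo", auth=awsauth, json=payload)
print(r.status_code, r.text)

# Snapshot the data so it lands in the S3 bucket.
requests.put(f"{host}/_snapshot/my-repo/snapshot-1", auth=awsauth)
```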
Option 2: Using VPC Endpoints
Another way for you to export the data from your OpenSearch domain is to access it via an alternate endpoint using the VPC Endpoints feature of AWS OpenSearch. It allows you to expose additional endpoints running on public or private subnets within the same VPC, a different VPC, or a different AWS account. In this case, you are essentially creating a venue to access the OpenSearch REST APIs outside of the private VPC, so you need to take care of who, other than you, will be able to do so as well. Please follow the best practices related to securing endpoints if you choose this option.
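If you choose this route, here is a hedged boto3 sketch of the CreateVpcEndpoint call from the opensearch client; the domain ARN, subnet, and security group IDs are all placeholders:

```python
import boto3

# Create a managed VPC endpoint for an existing OpenSearch domain.
client = boto3.client("opensearch")
resp = client.create_vpc_endpoint(
    DomainArn="arn:aws:es:us-east-1:123456789012:domain/my-domain",
    VpcOptions={
        "SubnetIds": ["subnet-0abc1234"],
        "SecurityGroupIds": ["sg-0abc1234"],
    },
)
print(resp["VpcEndpoint"]["VpcEndpointId"], resp["VpcEndpoint"]["Status"])
```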
Option 3: Using the ElasticDump Open Source Utility
The ElasticDump utility allows you to retrieve data from Elasticsearch/OpenSearch clusters in a format of your preference, and then import that data back into another cluster. It is a very flexible way for you to move data around—but it requires the utility to access the REST API endpoints of the cluster. Run this utility on a bastion server that has network access to your OpenSearch domain in the private VPC. Keep in mind, though, that AWS doesn't provide any support for this utility, and you must use it at your own risk.
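Alternatively, if you would rather script it yourself, here is a rough Python sketch of the same idea that also addresses the format mismatch in the question: scroll through the source index and rewrite each hit in _bulk (NDJSON) form. The endpoint and index name are placeholders, and authentication is omitted (add SigV4 or basic auth as your domain requires):

```python
import json
import requests

SRC = "https://vpc-mydomain-xxxx.us-east-1.es.amazonaws.com"  # placeholder
INDEX = "my-index"                                            # placeholder

with open("bulk.ndjson", "w") as out:
    resp = requests.post(
        f"{SRC}/{INDEX}/_search?scroll=2m",
        json={"size": 1000, "query": {"match_all": {}}}).json()
    while hits := resp["hits"]["hits"]:
        for hit in hits:
            # Each document becomes an action line followed by its source.
            out.write(json.dumps(
                {"index": {"_index": hit["_index"], "_id": hit["_id"]}}) + "\n")
            out.write(json.dumps(hit["_source"]) + "\n")
        resp = requests.post(
            f"{SRC}/_search/scroll",
            json={"scroll": "2m", "scroll_id": resp["_scroll_id"]}).json()

# Import into the local container:
#   curl -H 'Content-Type: application/x-ndjson' -XPOST \
#        'http://localhost:9200/_bulk' --data-binary @bulk.ndjson
```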
I hope that helps with your question. Let us know if you need any more help on this. 🙂
We are deploying multi-region (and possibly multi-cloud in the future).
Our ElasticSearch endpoint must thus be public.
I know I can add an IP-based policy on the AWS Elasticsearch service to essentially whitelist all endpoints which should be allowed to write their logs to the AWS ES service.
Looking for a "saner" alternative, I came across:
https://discuss.elastic.co/t/how-to-connect-beats-to-aws-elasticsearch-with-authentication/83465
and especially
https://forums.aws.amazon.com/thread.jspa?threadID=294252
the latter specifically saying:
Filebeat doesn't support IAM authentication so using it with this AWS Elasticsearch service typically doesn't work.
However, I found this:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html#aws-credentials-options
which is the Filebeat AWS module and seems to suggest that it actually may support it.
However, I couldn't find any official documentation or blog post confirming that I could use Filebeat on remote machines to send (authenticated? signed?) logs to a public AWS Elasticsearch endpoint, which would let me keep the policy open without having to maintain a whitelist (or maybe I need both).
I want to be able to authenticate/authorize clients to produce/consume messages on certain topics. They would be part of our VPN (incl. AWS). As I understand the available documentation, the only option to do this is to issue client certificates and set up ACLs based on the clients' DNs. Unfortunately I was not able to use my private CA (which I created on my Linux laptop) to create client certs, so the following questions arise:
Is it correct that I need to set up an AWS-hosted CA (ACM PCA)? That would result in almost twice the setup costs, including the minimum broker configs.
Could I proxy the outside world into the MSK cluster via something like Confluent's "Kafka REST Proxy"?
Am I missing something? Is there an easier way built into AWS?
Please enlighten me :)
Thanks in advance,
Marcel
Yes, I believe that's correct. To do client authentication over TLS, you need to provide the ARN of your private CA that's set up with ACM PCA at the time the cluster is created, and you have to use the aws command-line tool (aws kafka create-cluster ...) to create the cluster. The UI (last time I looked) didn't have anywhere to specify that ARN.
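For illustration, the same thing via boto3 (equivalent to the aws kafka create-cluster call). Every name, subnet, version, and ARN below is a placeholder you would substitute:

```python
import boto3

# Create an MSK cluster with TLS client authentication backed by an
# ACM PCA private CA. All identifiers here are placeholders.
kafka = boto3.client("kafka")
kafka.create_cluster(
    ClusterName="my-cluster",
    KafkaVersion="2.8.1",
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-0aaa", "subnet-0bbb", "subnet-0ccc"],
        "SecurityGroups": ["sg-0abc1234"],
    },
    ClientAuthentication={
        "Tls": {
            "CertificateAuthorityArnList": [
                "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/xxxx"
            ]
        }
    },
)
```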
I don't know - we bit the bullet and set up a private CA with ACM.
Nope. We're hoping that eventually AWS will integrate IAM so you can authenticate as an IAM user instead of a client certificate, but that's not where it stands today. Today, it's client certificate only for authentication.
Support for Username and Password Security looks like what you want? I think it's new.
There's AWS Cognito which you might want to try https://aws.amazon.com/cognito/
In API Gateway I've created one custom domain, foo.example.com, which creates a CloudFront distribution with that CNAME.
I also want to create a wildcard domain, *.example.com, but when attempting to create it, CloudFront throws an error:
CNAMEAlreadyExistsException: One or more of the CNAMEs you provided are already associated with a different resource
AWS states in its docs that:
However, you can add a wildcard alternate domain name, such as *.example.com, that includes (that overlaps with) a non-wildcard alternate domain name, such as www.example.com. Overlapping domain names can be in the same distribution or in separate distributions as long as both distributions were created by using the same AWS account.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-wildcard
So I might have misunderstood this. Is it possible to accomplish what I've described?
This is very likely to be a side-effect of your API Gateway endpoint being configured as Edge Optimized instead of Regional, because with an edge-optimized API, there is a hidden CloudFront distribution provisioned automatically... however, the CloudFront distribution associated with your API is not owned by your account, but rather by an account associated with API Gateway.
Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution that is created and managed by API Gateway.
— Amazon API Gateway Supports Regional API Endpoints
This creates a conflict that prevents the wildcard distribution from being created.
Subdomains that mask a wildcard are not allowed to cross AWS account boundaries, because this would potentially allow traffic for a wildcard distribution's matching domains to be hijacked by creating a more specific alternate domain name -- but, as you noted from the documentation, you can do this within your own account.
Redeploying your API as Regional instead of Edge Optimized is the likely solution. If you still want the edge optimization behavior, you can create another CloudFront distribution with that specific subdomain for use with the API. This would be allowed, because you would own the distribution. Regional APIs are still globally accessible.
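If you go the Regional route, here is a sketch of recreating the custom domain with boto3. The domain and certificate ARN are placeholders, and note that for a regional custom domain the ACM certificate must live in the API's own region:

```python
import boto3

# Recreate the custom domain as a Regional endpoint so the CloudFront
# distribution for *.example.com stays under your own control.
apigw = boto3.client("apigateway")
resp = apigw.create_domain_name(
    domainName="foo.example.com",  # placeholder
    regionalCertificateArn=(
        "arn:aws:acm:us-east-1:123456789012:certificate/xxxx"),  # placeholder
    endpointConfiguration={"types": ["REGIONAL"]},
)
# Point your DNS (e.g. a Route 53 alias) at the returned regional hostname.
print(resp["regionalDomainName"])
```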
Yes, it is. But keep in mind that CNAMEs set for CloudFront distributions are validated to be globally unique, including API Gateway's distributions. So this means you (or some other account) already have that CNAME set up. Currently there is no way to look up where the conflict is; you may need to raise a ticket with AWS Support if you can't find it yourself.
I would like to use AWS Lambda to perform a computation on behalf of a 3rd party and then prove to them that I did so as intended. A proof would be a cryptographically signed digest of the function body, the request, and the response. Ideally, Amazon would sign the digest with its own private key and publish their public key to allow verification of the signature. The idea is similar to the "secure enclave" that new Intel chips provide through SGX (Software Guard Extensions).
The existing Lambda service has some of the ingredients needed. For example, the GetFunction response includes a CodeSha256 field that uniquely identifies the function implementation. And the Amazon API Gateway allows you to make HTTPS requests to the Lambda service, which might allow a TLSNotary-style proof of the request-response contents. But to do this right I think AWS Lambda needs to provide the signature directly.
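To illustrate that first ingredient: a small boto3 sketch that reads the published CodeSha256 and recomputes it from the deployment package that GetFunction links to. The function name is a placeholder:

```python
import base64
import hashlib
import urllib.request

import boto3

# Fetch the published digest, then recompute it from the deployment zip.
lam = boto3.client("lambda")
fn = lam.get_function(FunctionName="my-function")  # placeholder name
published = fn["Configuration"]["CodeSha256"]

# fn["Code"]["Location"] is a short-lived presigned URL for the package,
# so a third party granted access could download and verify it themselves.
blob = urllib.request.urlopen(fn["Code"]["Location"]).read()
recomputed = base64.b64encode(hashlib.sha256(blob).digest()).decode()
print("match:", published == recomputed, published)
```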
Microsoft Azure is working on trusted software enclaves ("cryptlets") in their Project Bletchley:
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/bletchley-whitepaper.md
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/CryptletsDeepDive.md
Is something like this possible with the current AWS Lambda?
Let's make some definitions first: Lambda isn't a server but a service that runs your code. It does not provide any signature directly, but rather only what you configure for it on AWS.
The Secure Enclave is one implementation, or type, of TPM (Trusted Platform Module); this can be done in many ways, and the Secure Enclave is one of the best.
The short answer to your question is yes, it can be done, as long as you implement the needed code and add all the required configuration (SSL, etc.).
I would advise you to read the following: http://ieeexplore.ieee.org/document/5703613/?reload=true
And in case you want a TPM out of the box, you can use this Microsoft project: https://github.com/Microsoft/TSS.MSR
AWS has a different approach to security. You can set what can use a particular resource, and in which way.
You can certainly do what was described: you can identify the request, the response, and the exact version of the code that was used. The question is whether you want to sign the code while processing the request. The easier way is to have that calculated on deploy.
For the first case, you need a language with access to its own source. With Python, for example, you can get the source, sign it, and return or store it somewhere, as sketched below.
For the second case, I would use tagging.
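A sketch of that first case, with the caveat that everything here (the handler, the do_work stub, the SIGNING_KEY environment variable) is hypothetical, and that a self-reported digest only proves what your code claims, not what AWS actually executed:

```python
import hashlib
import hmac
import inspect
import json
import os

def do_work(event):
    return event  # placeholder for your actual computation

# Hash the handler's own source together with the request and response,
# then tag the reply with an HMAC under a key you control.
def handler(event, context):
    response = {"result": do_work(event)}
    digest = hashlib.sha256(
        (inspect.getsource(handler)
         + json.dumps(event, sort_keys=True)
         + json.dumps(response, sort_keys=True)).encode()
    ).hexdigest()
    response["proof"] = hmac.new(
        os.environ["SIGNING_KEY"].encode(), digest.encode(), hashlib.sha256
    ).hexdigest()
    return response
```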
There is also another solution to the problem, using IAM. You can provision an IAM role for your customer that has read access to the Lambda source code. By using the public Lambda endpoint (the one that looks like https://api-id.execute-api.region.amazonaws.com/STAGE), you can assure the customer that the request is directly hitting this specific Lambda function.
The IAM role available to your customer has permissions to do the following:
View the lambda code and other details across all revisions
Read the API gateway configuration to validate that the request directly hits the lambda, and doesn't go elsewhere.
All your customer needs to do then is set up auditing at their end against the Lambda by using the given IAM role. They can set up a periodic cron that downloads all versions of your Lambda as it is updated. If you have a pre-review process, that can easily be configured against their alerting.
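A sketch of what that customer-side cron might look like with boto3, assuming the cross-account role grants lambda:ListVersionsByFunction (the function ARN shown is a placeholder):

```python
import boto3

# Enumerate every published version of the function and record its
# CodeSha256, so the customer can diff and alert on changes over time.
lam = boto3.client("lambda")
pages = lam.get_paginator("list_versions_by_function").paginate(
    FunctionName="arn:aws:lambda:us-east-1:123456789012:function:my-function")
for page in pages:
    for version in page["Versions"]:
        print(version["Version"], version["CodeSha256"], version["LastModified"])
```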
Note that this relies on "AWS" running in good faith, with the underlying assumptions being:
AWS Lambda is running the code it is configured against.
AWS management APIs return correct responses.
The time-to-alert is reasonable. This is easier, since you can download previous lambda code versions as well.
All of these are reasonable assumptions.