List objects from publicly accessible Amazon S3 bucket

I have an Amazon S3 bucket that is public, with list and get permissions. I want to list its objects in Ruby. We can use the AWS SDK to list objects, but it requires credentials. How can I list objects in Ruby without using credentials?

I think you could use the HTTP method. Amazon S3 supports making requests to S3 endpoints using the REST API.
I tried PutObject over plain HTTP with the curl command and it worked. But the object owner is anonymous, so I can't remove the object.
I am not familiar with Ruby, but I think ListObjects should also work without using the SDK.
This is my curl command:
curl --request PUT --upload-file "./myobject" "https://${mybkt}......../myobject"
ListObjectsV2 REST API doc:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
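Since ListObjectsV2 is a plain GET request, a bucket with public list permission can be listed with any HTTP client and no signature at all. Below is a minimal sketch in Python (the bucket name and region are placeholders); the same GET request works from Ruby with Net::HTTP.

import urllib.request
import xml.etree.ElementTree as ET

bucket = "my-public-bucket"  # placeholder: a bucket with public s3:ListBucket
region = "us-east-1"

# Unsigned GET against the ListObjectsV2 REST endpoint.
url = f"https://{bucket}.s3.{region}.amazonaws.com/?list-type=2"
with urllib.request.urlopen(url) as resp:
    body = resp.read()

# The response is XML; object keys live in <Contents><Key> elements.
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(body)
for contents in root.findall("s3:Contents", ns):
    print(contents.find("s3:Key", ns).text)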

To use an AWS SDK in any language, you need to create a service client in that language; Ruby is no different from .NET, Python, Java, etc.
To make an AWS service call from a service client (as opposed to the CLI, REST API, etc.), you must specify credentials. More information can be found in the AWS SDK for Ruby Developer Guide:
Configuring the AWS SDK for Ruby

Not all AWS SDKs expose an option to make unsigned API requests, and there is no such option in the Ruby SDK.
You might want to comment on, or ask to re-open, this previously closed feature request in the Ruby SDK.
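For comparison, Python's boto3 is one SDK that does expose unsigned requests; a minimal sketch (the bucket name is a placeholder):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An S3 client that sends unsigned (anonymous) requests.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="my-public-bucket")  # placeholder bucket
for obj in resp.get("Contents", []):
    print(obj["Key"])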

Related

Accessing ElasticSearch API on AWS

AWS recommends using its SDKs (such as boto3) or command-line tools to configure an ElasticSearch cluster.
However, some ElasticSearch API endpoints are not exposed in AWS APIs (e.g. _cat/shards).
Even some AWS support documents (such as this one on cluster rebalancing) seem to make direct requests to the cluster API.
The trouble is: such requests need to be authenticated using AWS4Auth (only certain IAM roles have permission to write to ElasticSearch in my setup), and even AWS recommends against manually creating signed HTTP requests because it's such a pain.
My question is: do I need to manually create signed HTTP requests against my ES cluster in order to manage it, or is there an easier way that I've missed?
Based on the comments, the proposed solution is to use the third-party aws-requests-auth package:
This package allows you to authenticate to AWS with Amazon's signature version 4 signing process with the python requests library.
An example of its use for ElasticSearch:
from aws_requests_auth.aws_auth import AWSRequestsAuth
from elasticsearch import Elasticsearch, RequestsHttpConnection

es_host = 'search-service-foobar.us-east-1.es.amazonaws.com'

# Sign requests with SigV4 credentials for the 'es' service.
auth = AWSRequestsAuth(aws_access_key='YOURKEY',
                       aws_secret_access_key='YOURSECRET',
                       aws_host=es_host,
                       aws_region='us-east-1',
                       aws_service='es')

# Use the requests connection_class and pass in our custom auth class.
es_client = Elasticsearch(host=es_host,
                          port=80,
                          connection_class=RequestsHttpConnection,
                          http_auth=auth)

print(es_client.info())
I ended up finding this AWS doc on request signing for AWS ElasticSearch. It clearly shows that the intended approach is to use scripts, using the HTTP client of choice for the language.
As Marcin mentioned in his answer, aws-requests-auth is one choice to simplify this in Python.

How can I enable the API in AWS Managed Workflows for Apache Airflow?

I'm testing the waters for running Apache Airflow on AWS through the Managed Workflows for Apache Airflow (MWAA). The version of Airflow that AWS have deployed and are managing for me is 1.10.12.
When I try to access the v1 REST API at /api/experimental/test I get back status code 403 Forbidden.
Is it possible to enable the experimental API in MWAA? How?
I think MWAA provides a REST endpoint for running CLI commands:
https://$WEB_SERVER_HOSTNAME/aws_mwaa/cli
It's quite confusing because you first need to create a CLI token using the awscli and then hit the endpoint using that token. You will need a policy that allows your awscli to request that token.
Lastly, not all commands are supported, just a subset.
Anyway, it's all explained in the user guide:
https://docs.aws.amazon.com/mwaa/latest/userguide/amazon-mwaa-user-guide.pdf
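For illustration, a minimal sketch of that token-then-call flow in Python with boto3 (the environment name is a placeholder, and only the subset of CLI commands supported by MWAA will work):

import base64
import json
import urllib.request

import boto3

# Request a short-lived CLI token for the environment (name is a placeholder).
mwaa = boto3.client("mwaa", region_name="us-east-1")
token = mwaa.create_cli_token(Name="my-mwaa-environment")

# POST a raw Airflow CLI command to the aws_mwaa/cli endpoint.
req = urllib.request.Request(
    url=f"https://{token['WebServerHostname']}/aws_mwaa/cli",
    data=b"version",  # the Airflow CLI command to run
    headers={
        "Authorization": f"Bearer {token['CliToken']}",
        "Content-Type": "text/plain",
    },
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

# stdout and stderr come back base64-encoded.
print(base64.b64decode(result["stdout"]).decode())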
By default, the api.auth_backend configuration option is set to airflow.api.auth.backend.deny_all in MWAA environments. You need to override it to one of the authentication methods mentioned in the documentation, via the environment's Airflow configuration options.
Note: it is highly discouraged to use airflow.api.auth.backend.default, as it will leave your environment publicly accessible.
[2021/07/29] Edit:
Based on this comment, AWS blocked access to the REST API.

Getting error while invoking API using AWS Lambda. (AWS Lambda + AWS API Gateway+ Postman)

I get an error while invoking the AWS SageMaker endpoint API from a Lambda function. When I call this using Postman, I am getting an error like:
{
    "errorMessage": "module initialization error"
}
Just to make it clear, you can't call SageMaker endpoints directly using Postman (even if it were possible, it would not be straightforward).
You may need to use an AWS SDK (i.e. boto) for that.
Ref: https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-amazon-api-gateway-and-aws-lambda/
What I would suggest is to create a small HTTP server with Flask and use the AWS SDK (boto) to call the endpoint. Then you can call your Flask endpoint using Postman.
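A minimal sketch of such a Flask proxy with boto3 (the endpoint name and content type are hypothetical; match them to your model):

import boto3
from flask import Flask, request

app = Flask(__name__)
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

@app.route("/predict", methods=["POST"])
def predict():
    # Forward the raw request body to the SageMaker endpoint.
    response = runtime.invoke_endpoint(
        EndpointName="my-xgboost-endpoint",  # hypothetical endpoint name
        ContentType="text/csv",              # match what the model expects
        Body=request.get_data(),
    )
    return response["Body"].read()

if __name__ == "__main__":
    app.run(port=5000)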
We recommend using an AWS SDK to invoke your endpoint. The SDK clients handle serialization for you, as well as request signing, etc. It would be really hard to get this right manually with Postman.
The SDK client is available in many languages, including Java, Python, JS, etc.:
https://docs.aws.amazon.com/sagemaker/latest/dg/API_runtime_InvokeEndpoint.html#API_runtime_InvokeEndpoint_SeeAlso
Next time, please include more details in your question, e.g. POST request data, headers, etc.
Anyway, to help you out with calling a SageMaker endpoint using Postman:
1. In the 'Authorization' tab, select type 'AWS Signature'.
2. Enter the access key and secret key of an IAM user that has permission to SageMaker resources.
3. Enter the AWS region, e.g. us-east-1.
4. Enter 'Service Name' as 'sagemaker'.
5. Select the right content type; some ML algorithms only accept 'text/csv'.
6. Select request type 'POST'.
7. Enter the SageMaker invocation URL, e.g. 'https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/xgboost-xxxx-xx-xx-xx-xx-xx-xxx/invocations'.
Try it out and let me know if you have any issues.

Which method is secure for accessing the AWS services?

Other than AWS Console, there are multiple ways to access the AWS Services.
AWS CLI(awscli/AWSPowershell)
AWS SDK
REST API
Out of these three methods which is the most secure one?
By the way, consider this: if you are working with the AWS CLI, you need to store the credentials using the aws configure command.
I know that without passing credentials (access key and secret key), the SDK, CLI, and API cannot access AWS services. But I am hoping there is still some other way to access/manage the services.
In the end, all of these ways call the AWS APIs, so from that perspective they are equally secure.
There are differences in how they use the features of the APIs, though. While the AWS CLI supports MFA authentication, only some SDKs do (e.g. boto3 does, the aws-sdk-js doesn't yet), and for accessing the APIs directly you would have to implement that yourself.
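To illustrate the MFA point, a minimal sketch with boto3 (the MFA device ARN and token code are placeholders):

import boto3

sts = boto3.client("sts")

# Exchange a one-time MFA code for temporary session credentials.
resp = sts.get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/my-user",  # placeholder ARN
    TokenCode="123456",                                    # code from the MFA device
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Build a session from the temporary credentials and verify the caller.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])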
All of the methods mentioned have a similar degree of security; how you store and use the credentials is what affects the actual security strength.

Using S3 for saving images from mobile application

I am creating a backend service that will receive requests from an Android application for creating service requests. These service requests will contain details about the service items and also some images related to the request. We want to use S3 to store the images directly from the Android application and to get the key of the saved image through an API call on the backend service.
The problem with this approach is the authorization of the mobile application to access the shared bucket.
If we save the access key of the shared bucket in the application, the code can be decompiled and the secret will be compromised.
Another option is to create an API on the backend service that gives the mobile application an authorization key before it needs to put the image to S3. In this way, we can also rotate the secrets periodically.
Which of these approaches is better in terms of security? Is there any other approach that I am missing? It sounds like a standard access pattern for saving files to S3, so there must be something for this particular scenario.
You don't need to invent an API to do this - AWS provides its STS service for just this use case.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
To request temporary security credentials, you can use the AWS STS API actions. To call the APIs, you can use one of the AWS SDKs, which are available for a variety of programming languages and environments, including Java, .NET, Python, Ruby, Android, and iOS. The SDKs take care of tasks such as cryptographically signing your requests, retrying requests if necessary, and handling error responses. You can also use the AWS STS Query API, which is described in the AWS Security Token Service API Reference. Finally, two command line tools support the AWS STS commands: the AWS Command Line Interface, and the AWS Tools for Windows PowerShell.

The AWS STS API actions return temporary security credentials that consist of an access key and a session token. The access key consists of an access key ID and a secret key. Users (or an application that the user runs) can use these credentials to access your resources. When the credentials are created, they are associated with an IAM access control policy that limits what the user can do when using the credentials. For more information, see Using Temporary Security Credentials to Request Access to AWS Resources.
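For the scenario in the question, a minimal sketch of the backend side with boto3 (the bucket name, prefix, and federated user name are hypothetical): the backend issues short-lived, upload-only credentials, and the mobile app uses them to put the object.

import json

import boto3

sts = boto3.client("sts")

def issue_upload_credentials():
    # Scope the temporary credentials down to uploads under one prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-shared-bucket/uploads/*",  # hypothetical bucket
        }],
    }
    resp = sts.get_federation_token(
        Name="mobile-uploader",   # hypothetical federated user name
        Policy=json.dumps(policy),
        DurationSeconds=900,      # 15 minutes, the minimum allowed
    )
    # Return AccessKeyId, SecretAccessKey, SessionToken, Expiration to the app.
    return resp["Credentials"]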