I have hit a roadblock using the loopback-component-storage with Amazon S3.
As a test, I am trying to upload a file to S3 from my browser app, which is calling my loopback API on the backend.
My server config for datasources.json looks like:
"s3storage": {
"name": "s3storage",
"connector": "loopback-component-storage",
"provider": "amazon",
"key": “blahblah”,
"keyId": “blahblah”
},
My API endpoint is:
'/api/Storage'
The error response I am getting from the API is as follows:
error: {name: "MissingRequiredParameter", status: 500, message: "Missing required key 'Bucket' in params",…}
  code: "MissingRequiredParameter"
  message: "Missing required key 'Bucket' in params"
  name: "MissingRequiredParameter"
  stack: "MissingRequiredParameter: Missing required key 'Bucket' in params …"
  status: 500
  time: "2015-03-18T01:54:48.267Z"
How do I pass the {"params": {"Bucket": "bucket-name"}} parameter to my LoopBack REST API?
Please advise. Thanks much!
AFAIK Buckets are known as Containers in the loopback-component-storage or pkgcloud world.
You can specify a container in your URL params. If your target is /api/Storage, then you'll specify your container in that path with something like /api/Storage/container1/upload, since the format is PATH/:DATASOURCE/:CONTAINER/:ACTION.
Take a look at the tests here for more examples:
https://github.com/strongloop/loopback-component-storage/blob/4e4a8f44be01e4bc1c30019303997e61491141d4/test/upload-download.test.js#L157
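For illustration, here is a minimal sketch of hitting such an upload route from Python with requests; the host, container name, form field name, and file are assumptions rather than anything from the question:
import requests

# Hypothetical local LoopBack server and an existing container named "container1"
url = "http://localhost:3000/api/Storage/container1/upload"

# loopback-component-storage expects a multipart/form-data upload
with open("test.csv", "rb") as f:
    response = requests.post(url, files={"file": f})

print(response.status_code, response.json())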
Bummer. "Container" basically translates to "bucket" for S3. I was trying to pass the params object via POST, but the devil was in the details: the HTTP POST path for upload expects the bucket/container in the path itself, so /api/Storage/abc/upload means 'abc' is the bucket.
I'm looking for a programmatic way to retrieve parameters just by giving the name or a part of the complete path (instead of giving the full path with the name).
It's pretty easy using the Parameter Store console in AWS Systems Manager: if I type tokens, I retrieve all parameters whose Name contains tokens.
Is there a way to do the same using the AWS CLI or an AWS SDK (Python or Go preferably)?
I think this is what you are after:
aws ssm describe-parameters --parameter-filters Key=Name,Values=token,Option=Contains
Or with Python:
import boto3

response = boto3.client("ssm").describe_parameters(
    ParameterFilters=[
        {
            'Key': 'Name',
            'Option': 'Contains',
            'Values': [
                'token',
            ]
        },
    ]
)
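A small follow-up, assuming the same client: describe_parameters returns results in pages (up to 50 per call), so a paginator helps collect every matching name:
import boto3

ssm = boto3.client("ssm")

# Gather the names of all parameters whose Name contains "token"
names = []
paginator = ssm.get_paginator("describe_parameters")
for page in paginator.paginate(
    ParameterFilters=[{"Key": "Name", "Option": "Contains", "Values": ["token"]}]
):
    names.extend(p["Name"] for p in page["Parameters"])

print(names)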
I am trying to add expiration days to a file and a bucket, but I have this problem:
sudo s3cmd expire s3://<my-bucket>/ --expiry-days=3 expiry-prefix=backup
ERROR: Error parsing xml: syntax error: line 1, column 0
ERROR: not found
ERROR: S3 error: 404 (Not Found)
and this
sudo s3cmd expire s3://<my-bucket>/<folder>/<file> --expiry-day=3
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3://<my-bucket>/<folder>/<file>'
How to add expire days in DO Spaces for a folder or file by using s3cmd?
Consider configuring the bucket's Lifecycle Rules.
Lifecycle rules can be used to perform different actions on objects in a Space over the course of their "life." For example, a Space may be configured so that objects in it expire and are automatically deleted after a certain length of time.
In order to configure new lifecycle rules, send a PUT request to ${BUCKET}.${REGION}.digitaloceanspaces.com/?lifecycle
The body of the request should include an XML element named LifecycleConfiguration containing a list of Rule objects.
https://developers.digitalocean.com/documentation/spaces/#get-bucket-lifecycle
The expire option is not implemented on DigitalOcean Spaces.
Thanks to Vitalii's answer for pointing to the API.
However, the API isn't really easy to use, so I've done it via a Node.js script.
First of all, generate your API keys here: https://cloud.digitalocean.com/account/api/tokens
And put them in the ~/.aws/credentials file (according to the docs):
[default]
aws_access_key_id=your_access_key
aws_secret_access_key=your_secret_key
Now create an empty Node.js project, run npm install aws-sdk, and use the following script:
const aws = require('aws-sdk');

// Replace with your region endpoint, nyc1.digitaloceanspaces.com for example
const spacesEndpoint = new aws.Endpoint('fra1.digitaloceanspaces.com');

// Replace with your bucket name
const bucketName = 'myHeckingBucket';

const s3 = new aws.S3({endpoint: spacesEndpoint});

s3.putBucketLifecycleConfiguration({
    Bucket: bucketName,
    LifecycleConfiguration: {
        Rules: [{
            ID: "autodelete_rule",
            Expiration: {Days: 30},
            Status: "Enabled",
            Prefix: '/', // Unlike AWS in DO this parameter is required
        }]
    }
}, function (error, data) {
    if (error)
        console.error(error);
    else
        console.log("Successfully modified bucket lifecycle!");
});
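For anyone preferring Python, here is a roughly equivalent sketch using boto3 against the same Spaces API; the endpoint, region, and bucket name are placeholders to replace with your own, and credentials are read from ~/.aws/credentials as above:
import boto3

# Placeholder endpoint and bucket; replace with your Space's region and name
s3 = boto3.client(
    's3',
    endpoint_url='https://fra1.digitaloceanspaces.com',
    region_name='fra1',
)

s3.put_bucket_lifecycle_configuration(
    Bucket='myHeckingBucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'autodelete_rule',
            'Expiration': {'Days': 30},
            'Status': 'Enabled',
            'Prefix': '/',  # Spaces expects a prefix on the rule
        }]
    },
)
print("Successfully modified bucket lifecycle!")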
I was trying to set up an encrypted RDS replica in another region, but I got stuck on generating the pre-signed URL.
It seems that boto3/botocore does not allow the DestinationRegion parameter, which the AWS API (link) defines as a requirement when generating a PreSignedUrl.
Versions used:
boto3 (1.4.7)
botocore (1.7.10)
Output:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "DestinationRegion", must be one of: DBInstanceIdentifier, SourceDBInstanceIdentifier, DBInstanceClass, AvailabilityZone, Port, AutoMinorVersionUpgrade, Iops, OptionGroupName, PubliclyAccessible, Tags, DBSubnetGroupName, StorageType, CopyTagsToSnapshot, MonitoringInterval, MonitoringRoleArn, KmsKeyId, PreSignedUrl, EnableIAMDatabaseAuthentication, SourceRegion
Example code:
import boto3

url = boto3.client('rds', 'eu-east-1').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DestinationRegion': 'eu-east-1',
        'SourceDBInstanceIdentifier': 'abc',
        'KmsKeyId': '1234',
        'DBInstanceIdentifier': 'someidentifier'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
The same issue was already reported but got closed.
Thanks for help,
Petar
Generate the pre-signed URL from the source region, then populate create_db_instance_read_replica with that URL.
The presigned URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source AWS Region that contains the encrypted source DB instance
PreSignedUrl (string) --
The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the source AWS Region that contains the source DB instance.
import boto3

session = boto3.Session(profile_name='profile_name')

# Generate the presigned CreateDBInstanceReadReplica request in the source region
url = session.client('rds', 'SOURCE_REGION').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DBInstanceIdentifier': 'db-1-read-replica',
        'SourceDBInstanceIdentifier': 'database-source',
        'SourceRegion': 'SOURCE_REGION'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
print(url)

source_db = session.client('rds', 'SOURCE_REGION').describe_db_instances(
    DBInstanceIdentifier='database-SOURCE'
)
print(source_db)

# Create the replica in the destination region, passing the presigned URL
response = session.client('rds', 'DESTINATION_REGION').create_db_instance_read_replica(
    SourceDBInstanceIdentifier="arn:aws:rds:SOURCE_REGION:account_number:db:database-SOURCE",
    DBInstanceIdentifier="db-1-read-replica",
    KmsKeyId='DESTINATION_REGION_KMS_ID',
    PreSignedUrl=url,
    SourceRegion='SOURCE_REGION'
)
print(response)
I am trying the API Gateway validation example from https://github.com/rpgreen/apigateway-validation-demo. I observed that, when importing the given swagger.json file, minItems is not carried over into the models created during the Swagger import.
"CreateOrders": {
"title": "Create Orders Schema",
"type": "array",
"minItems" : 1,
"items": {
"type": "object",
"$ref" : "#/definitions/Order"
}
}
Because of this, when you give an empty array [] as input, instead of throwing an error about the minimum number of items in the array, the API responds with the message 'created orders successfully'.
When I manually add the same constraint from the API Gateway console UI, it seems to work as expected. Am I missing something, or is this a bug in the importer?
This is a known issue with the Swagger import feature of API Gateway.
From http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
The maxItems and minItems tags are not included in simple request validation. To work around this, update the model after import before doing validation.
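As a hedged sketch of that workaround, the imported model can also be patched programmatically via boto3; the REST API id and the exact schema below are placeholders, not values from the demo:
import json
import boto3

apigw = boto3.client('apigateway')

# Re-apply the array constraints (including minItems) to the imported model.
# 'abc123' is a placeholder for your REST API id; 'CreateOrders' is the model name.
schema = {
    "title": "Create Orders Schema",
    "type": "array",
    "minItems": 1,
    "items": {"$ref": "https://apigateway.amazonaws.com/restapis/abc123/models/Order"}
}

apigw.update_model(
    restApiId='abc123',
    modelName='CreateOrders',
    patchOperations=[
        {'op': 'replace', 'path': '/schema', 'value': json.dumps(schema)}
    ]
)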
I've got a user with all permissions.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
I'm using aws-sdk-php-2 to put and copy objects in a bucket.
http://docs.aws.amazon.com/aws-sdk-php-2/latest/class-Aws.S3.S3Client.html
The put code works perfectly:
$client->putObject(array(
    'Bucket' => 'kiosk',
    'Key' => 'test/orders/test.csv',
    'SourceFile' => $sourcePath,
));
After checking that the object was created on S3 via https://console.aws.amazon.com/s3, I'm executing the next script.
$result = $client->copyObject(array(
    'Bucket' => 'kiosk',
    'CopySource' => 'test/orders/test.csv',
    'Key' => 'test/test.csv',
));
And I'm getting a fatal error:
Fatal error: Uncaught Aws\S3\Exception\S3Exception: AWS Error Code: AllAccessDisabled, Status Code: 403, AWS Request ID: XXX, AWS Error Type: client, AWS Error Message: All access to this object has been disabled, User-Agent: aws-sdk-php2/2.2.1 Guzzle/3.3.1 curl/7.19.7 PHP/5.4.13 thrown in phar:///usr/share/pear/AWSSDKforPHP/aws.phar/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 89
After uploading the file manually via console.aws.amazon.com/s3, I see a different error when trying to copy:
Fatal error: Uncaught Aws\S3\Exception\AccessDeniedException: AWS Error Code: AccessDenied, Status Code: 403, AWS Request ID: XXX, AWS Error Type: client, AWS Error Message: Access Denied, User-Agent: aws-sdk-php2/2.2.1 Guzzle/3.3.1 curl/7.19.7 PHP/5.4.13 thrown in phar:///usr/share/pear/AWSSDKforPHP/aws.phar/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 89
I also tried to set permissions on the file and folder via console.aws.amazon.com/s3:
Grantee: Everyone, Open/Download and View Permission and Edit Permission
But I still get the same error.
I know this is an old question, but I ran into the same issue recently while doing work on a legacy project.
$this->client->copyObject([
    'Bucket' => $this->bucket,
    'CopySource' => $file,
    'Key' => str_replace($source, $destination, $file),
]);
All of my other S3 calls worked, but copyObject continued to throw an ACCESS DENIED error. After some digging, I finally figured out why.
I was passing just the key and making the assumption that the bucket being passed was what both the source and destination would use. Turns out that is an incorrect assumption. The source must have the bucket name prefixed.
Here was my solution:
$this->client->copyObject([
    'Bucket' => $this->bucket,
    // Added the bucket name to the copy source
    'CopySource' => $this->bucket.'/'.$file,
    'Key' => str_replace($source, $destination, $file),
]);
It says "Access Denied" because it thinks the first part of your key/folder is actually the name of the bucket which either doesn't exist or you really don't have access to.
Found out what the issue is here; being an AWS newbie, I struggled for a bit until I realized that the policy for each user you set up needs to explicitly allow the service you're using.
In this case, I hadn't allowed the user access to S3.
Go to IAM, then go to Users and click on the particular user whose credentials you're using. From there, go to the Permissions tab, then click on Attach User Policy and find the S3 policy under the policy templates. This should fix your problem.
Hope that helps!
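If you prefer doing that step programmatically rather than in the console, here is a hedged boto3 sketch; the user name is a placeholder, and AmazonS3FullAccess stands in for whichever S3 policy fits your needs:
import boto3

iam = boto3.client('iam')

# Attach an AWS-managed S3 policy to the IAM user whose keys the SDK is using
iam.attach_user_policy(
    UserName='my-app-user',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
)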
The popular answer was on point, but I still had issues. I had to include the ACL option.
$this->client->copyObject([
    'Bucket' => $this->bucket,
    // Added the bucket name to the copy source
    'CopySource' => $this->bucket.'/'.$file,
    'Key' => str_replace($source, $destination, $file),
    'ACL' => 'public-read'
]);
ACL can be one of these values: 'private|public-read|public-read-write|authenticated-read|aws-exec-read|bucket-owner-read|bucket-owner-full-control'.