Ember Lightning Deploy Strategy - S3 NoSuchBucket error - ember.js

I'm deploying to S3 using ember-cli-deploy-lightning-pack. I've followed various posts and screencasts setting this up.
On S3 I have a new bucket called emberdevlocal with nothing in it.
This is the snippet I have in my deploy.js file:
if (deployTarget === 'dev') {
  ENV.build.environment = 'development';
  ENV.redis.url = process.env.REDIS_URL || 'redis://0.0.0.0:6379/';
  ENV.s3.bucket = 'emberdevlocal.s3.amazonaws.com';
}
I have my region set to
ENV.s3.region = 'us-west-2';
I have currently set the bucket permissions wide open to rule out a permissions problem.
When I run the deploy it fails about halfway through.
It sets the domain correctly:
Endpoint {
  protocol: 'https:',
  host: 's3-us-west-2.amazonaws.com',
  port: 443,
  hostname: 's3-us-west-2.amazonaws.com',
  pathname: '/',
  path: '/',
  href: 'https://s3-us-west-2.amazonaws.com/',
  constructor: [Object] },
region: 'us-west-2',
..........etc
It's doing a PUT to:
_header: 'PUT /emberdevlocal.s3.amazonaws.com/...........
I have the correct keys being passed.
I just can't figure out why it's timing out when trying to connect to the bucket.

For the ember-cli-deploy-s3 plugin I only put the bucket name, not the full S3 endpoint. Try simply:
ENV.s3.bucket = 'emberdevlocal';
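For context, a minimal sketch of how the dev block from the question might look with that change (same build/redis settings as above; the bucket name plus ENV.s3.region should be enough for the plugin and the AWS SDK to resolve the endpoint):

if (deployTarget === 'dev') {
  ENV.build.environment = 'development';
  ENV.redis.url = process.env.REDIS_URL || 'redis://0.0.0.0:6379/';
  ENV.s3.bucket = 'emberdevlocal';  // bucket name only, not emberdevlocal.s3.amazonaws.com
  ENV.s3.region = 'us-west-2';
}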

I have also come across cases where dots in the bucket name cause problems. I believe this was an AWS issue. Could you please try a bucket that doesn't include dots in the name (use dashes instead, or something) and let me know if that fixes your problem?


Google Deployment Manager error when using manual IP allocation in NAT (HTTP 400)

Context
I am trying to associate serverless egress with a static IP address (GCP Docs). I have been able to set this up manually through the GCP console, and now I am trying to implement it with Deployment Manager. With just the IP address and the router the deployment works, but once I add the NAT config I get 400s ("Request contains an invalid argument."), which does not give me enough information to fix the problem.
# config.yaml
resources:
  # addresses spec: https://cloud.google.com/compute/docs/reference/rest/v1/addresses
  - name: serverless-egress-address
    type: compute.v1.address
    properties:
      region: europe-west3
      addressType: EXTERNAL
      networkTier: PREMIUM
  # router spec: https://cloud.google.com/compute/docs/reference/rest/v1/routers
  - name: serverless-egress-router
    type: compute.v1.router
    properties:
      network: projects/<project-id>/global/networks/default
      region: europe-west3
      nats:
        - name: serverless-egress-nat
          natIpAllocateOption: MANUAL_ONLY
          sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES
          natIPs:
            - $(ref.serverless-egress-address.selfLink)
# error response
code: RESOURCE_ERROR
location: /deployments/<deployment-name>/resources/serverless-egress-router
message: '{
  "ResourceType":"compute.v1.router",
  "ResourceErrorCode":"400",
  "ResourceErrorMessage":{
    "code":400,
    "message":"Request contains an invalid argument.",
    "status":"INVALID_ARGUMENT",
    "statusMessage":"Bad Request",
    "requestPath":"https://compute.googleapis.com/compute/v1/projects/<project-id>/regions/europe-west3/routers/serverless-egress-router",
    "httpMethod":"PUT"
  }}'
Notably, if I remove the 'natIPs' array and set 'natIpAllocateOption' to 'AUTO_ONLY', it goes through without errors. While this is not the configuration I need, it does narrow the problem down to these config options.
Question
Which is the invalid argument?
Are there things outside of the YAML that I should check? The docs say the following, which makes me wonder whether there are other caveats like it:
Note that if this field contains ALL_SUBNETWORKS_ALL_IP_RANGES or ALL_SUBNETWORKS_ALL_PRIMARY_IP_RANGES, then there should not be any other Router.Nat section in any Router for this network in this region.
I checked the API reference, and passing the values that you used should work. Furthermore, if you talk directly to the API using a JSON payload with these values, it returns 200:
{
  "name": "nat",
  "network": "https://www.googleapis.com/compute/v1/projects/project/global/networks/nat1",
  "nats": [
    {
      "natIps": [
        "https://www.googleapis.com/compute/v1/projects/project/regions/us-central1/addresses/test"
      ],
      "name": "nat1",
      "natIpAllocateOption": "MANUAL_ONLY",
      "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES"
    }
  ]
}
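For reference, a rough sketch of what calling the API directly from Node could look like, using the routers.patch endpoint. The project, region, router name and access token below are placeholders, not values from the question:

const https = require('https');

// NAT config mirroring the payload above; all names and URLs are placeholders
const body = JSON.stringify({
  nats: [{
    name: 'nat1',
    natIpAllocateOption: 'MANUAL_ONLY',
    sourceSubnetworkIpRangesToNat: 'ALL_SUBNETWORKS_ALL_IP_RANGES',
    natIps: [
      'https://www.googleapis.com/compute/v1/projects/project/regions/us-central1/addresses/test'
    ]
  }]
});

const req = https.request({
  method: 'PATCH', // routers.patch does a partial update of the router
  hostname: 'compute.googleapis.com',
  path: '/compute/v1/projects/<project-id>/regions/<region>/routers/<router-name>',
  headers: {
    'Authorization': `Bearer ${process.env.ACCESS_TOKEN}`, // e.g. from `gcloud auth print-access-token`
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, (res) => {
  console.log('Status:', res.statusCode); // 200 when the Compute API accepts the NAT config
  res.resume();
});

req.on('error', console.error);
req.end(body);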
From what I can see, the request is correctly formed when using methods other than Deployment Manager, so there might be an issue in the tool.
I have filed an issue about this on Google's Issue Tracker for them to take a look at it.
The DM team might be able to shed light on what's happening here.

s3 SignedUrl x-amz-security-token

const AWS = require('aws-sdk');

export function main (event, context, callback) {
  const s3 = new AWS.S3();
  const data = JSON.parse(event.body);
  const s3Params = {
    Bucket: process.env.mediaFilesBucket,
    Key: data.name,
    ContentType: data.type,
    ACL: 'public-read',
  };
  const uploadURL = s3.getSignedUrl('putObject', s3Params);
  callback(null, {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({ uploadURL: uploadURL }),
  });
}
When I test it locally it works fine, but after deployment the signed URL includes an x-amz-security-token query parameter, and then I get an access denied response. How can I get rid of this x-amz-security-token?
I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
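For illustration only, here is a rough sketch of attaching such a policy to the function's execution role with the JS SDK. In practice this usually lives in serverless.yml or is done in the IAM console, and every name below is a placeholder rather than a value from the question:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

iam.putRolePolicy({
  RoleName: 'my-service-dev-lambda-role',   // placeholder: the role the Lambda assumes
  PolicyName: 'allow-media-bucket-uploads', // placeholder policy name
  PolicyDocument: JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      // PutObjectAcl is included because the presigned URL above sets ACL: 'public-read'
      Action: ['s3:PutObject', 's3:PutObjectAcl'],
      Resource: 'arn:aws:s3:::my-media-files-bucket/*' // placeholder bucket
    }]
  })
}, (err) => {
  if (err) console.error(err);
  else console.log('Policy attached to role');
});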
I just solved a very similar, probably the same, issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but you may not be, so this may or may not apply; if you are using temporary credentials, it will.
I initially used the method you use above, but then switched to the npm module aws-signature-v4 to see if it behaved differently, and I was getting the same error you are.
You will need the token; it is required when you have signed a request with temporary credentials. In Lambda's case the credentials are in the runtime, including the session token, which you need to pass. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
Buried in the docs (and sorry, I cannot find the place this is stated) it is pointed out that some services require the session token to be processed with the other canonical query params. The module I'm using was tacking it on at the end, as the sig v4 instructions seem to imply, so I modified it so the token is canonical and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change and now it works nicely for signing your s3 requests.
Signing is discussed here.
I would use the module I did as I have a feeling the sdk is doing the wrong thing for some reason.
Usage example (this is wrapped in a multipart upload, hence the part number and upload ID):
const sig4 = require('aws-signature-v4');

function createBaseUrl( bucketName, uploadId, partNumber, objectKey ) {
  let url = sig4.createPresignedS3URL( objectKey, {
    method: "PUT",
    bucket: bucketName,
    expires: 21600,
    query: `partNumber=${partNumber}&uploadId=${uploadId}`
  });
  return url;
}
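To illustrate the point above about the token being part of the canonical query string, here is a rough sketch (all values are placeholders) of how the presigned query parameters get sorted before signing, with X-Amz-Signature only appended afterwards:

// Placeholder presigned-URL query parameters for a request signed with temporary credentials
const query = {
  'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
  'X-Amz-Credential': '<access-key-id>/<date>/<region>/s3/aws4_request',
  'X-Amz-Date': '<timestamp>',
  'X-Amz-Expires': '21600',
  'X-Amz-Security-Token': '<session-token>', // included BEFORE signing, sorted with the rest
  'X-Amz-SignedHeaders': 'host'
};

// Canonical query string: keys sorted, keys and values URI-encoded
const canonicalQueryString = Object.keys(query)
  .sort()
  .map(k => `${encodeURIComponent(k)}=${encodeURIComponent(query[k])}`)
  .join('&');

// This string goes into the canonical request that gets signed;
// X-Amz-Signature is appended to the URL only after the signature is computed.
console.log(canonicalQueryString);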
I was facing the same issue. I'm creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST, with content-type=['multipart/form-data'].
Create a client like this:
import boto3

# Do not hard code credentials
s3_client = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Return the response:
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # the file you want to upload
response = s3_client.generate_presigned_post(
    bucket_name,
    file_path,
    Fields={"Content-Type": ""},
    Conditions=[
        acl,
        {"Content-Type": ""},
        ["starts-with", "$success_action_status", ""],
    ],
    ExpiresIn=3600)

How to use AWS Video Rekognition with an unauthenticated identity?

I have followed the steps in the documentation, but I received:
User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/CognitoRkUnauth_Role/CognitoIdentityCredentials is not authorized to perform: iam:PassRole on resource: arn:aws:iam::xxxxxxxxxx:role/CognitoRkUnauth_Role
The code fails on NotificationChannel. Without it, I receive the JobId correctly.
var params = {
  Video: {
    S3Object: {
      Bucket: 'mybucket',
      Name: 'myvideoa1.mp4'
    }
  },
  ClientRequestToken: 'LabelDetectionToken',
  MinConfidence: 70,
  NotificationChannel: {
    SNSTopicArn: 'arn:aws:sns:us-east-1:xxxxxxxx:RekognitionVideo',
    RoleArn: 'arn:aws:iam::xxxxxx:role/CognitoRkUnauth_Role'
  },
  JobTag: "DetectingLabels"
};
I set the configuration to CognitoRkUnauth_Role instead of an IAM user. Translation worked when I did this.
For RoleArn I created another role, but it fails too.
I am not the root user.
I know I need to give more information, but if someone can guide me, I will start the configuration again.
I am a beginner in AWS and there are several things I don't understand at all.
(English is not my first language.)
Well, I had to create a new User. I think there is no reason for doing what i wanted to do :p

Digital Ocean Spaces | Add expire date for files with s3cmd

I am trying to add an expiry in days to a file and a bucket, but I have this problem:
sudo s3cmd expire s3://<my-bucket>/ --expiry-days=3 expiry-prefix=backup
ERROR: Error parsing xml: syntax error: line 1, column 0
ERROR: not found
ERROR: S3 error: 404 (Not Found)
and this
sudo s3cmd expire s3://<my-bucket>/<folder>/<file> --expiry-day=3
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3:////'
How do I add expiry days in DO Spaces for a folder or file using s3cmd?
Consider configuring the bucket's Lifecycle Rules:
Lifecycle rules can be used to perform different actions on objects in a Space over the course of their "life." For example, a Space may be configured so that objects in it expire and are automatically deleted after a certain length of time.
In order to configure new lifecycle rules, send a PUT request to ${BUCKET}.${REGION}.digitaloceanspaces.com/?lifecycle
The body of the request should include an XML element named LifecycleConfiguration containing a list of Rule objects.
https://developers.digitalocean.com/documentation/spaces/#get-bucket-lifecycle
The expire option is not implemented on Digital Ocean Spaces
Thanks to Vitalii's answer for pointing to the API.
However, the API isn't really easy to use directly, so I've done it via a NodeJS script.
First of all, generate your API keys here: https://cloud.digitalocean.com/account/api/tokens
And put them in the ~/.aws/credentials file (according to the docs):
[default]
aws_access_key_id=your_access_key
aws_secret_access_key=your_secret_key
Now create an empty NodeJS project, run npm install aws-sdk, and use the following script:
const aws = require('aws-sdk');

// Replace with your region endpoint, nyc1.digitaloceanspaces.com for example
const spacesEndpoint = new aws.Endpoint('fra1.digitaloceanspaces.com');
// Replace with your bucket name
const bucketName = 'myHeckingBucket';

const s3 = new aws.S3({endpoint: spacesEndpoint});

s3.putBucketLifecycleConfiguration({
  Bucket: bucketName,
  LifecycleConfiguration: {
    Rules: [{
      ID: "autodelete_rule",
      Expiration: {Days: 30},
      Status: "Enabled",
      Prefix: '/', // Unlike AWS, in DO this parameter is required
    }]
  }
}, function (error, data) {
  if (error)
    console.error(error);
  else
    console.log("Successfully modified bucket lifecycle!");
});
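To double-check that the rule took effect, a quick follow-up sketch with the same client (getBucketLifecycleConfiguration is the matching read call in the AWS SDK, and Spaces exposes the corresponding get-bucket-lifecycle endpoint linked above):

// Read back the lifecycle rules currently set on the Space
s3.getBucketLifecycleConfiguration({ Bucket: bucketName }, function (error, data) {
  if (error)
    console.error(error);
  else
    console.log(JSON.stringify(data.Rules, null, 2));
});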

Writing a single file to multiple s3 buckets with gulp-awspublish

I have a simple single-page app, that is deployed to an S3 bucket using gulp-awspublish. We use inquirer.js (via gulp-prompt) to ask the developer which bucket to deploy to.
Sometimes the app may be deployed to several S3 buckets. Currently, we only allow one bucket to be selected, so the developer has to gulp deploy for each bucket in turn. This is dull and prone to error.
I'd like to be able to select multiple buckets and deploy the same content to each. It's simple to select multiple buckets with inquirer.js/gulp-prompt, but not simple to generate arbitrary multiple S3 destinations from a single stream.
Our deploy task is based upon generator-webapp's S3 recipe. The recipe suggests gulp-rename to rewrite the path to write to a specific bucket. Currently our task looks like this:
gulp.task('deploy', ['build'], () => {
  // get AWS creds
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirname;

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'list',
      name: 'dirname',
      message: 'Using the ‘' + config.awsCreds.bucket + '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirname = res.dirname;
    }))
    .pipe($.rename(function(path) {
      path.dirname = dirname + '/dist/' + path.dirname;
    }))
    .pipe(publisher.publish())
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
It's hopefully obvious, but config.awsCreds might look something like:
awsCreds: {
  dirname: 'default-bucket',
  dirnames: ['default-bucket', 'other-bucket', 'another-bucket']
}
Gulp-rename rewrites the destination path to use the correct bucket.
We can select multiple buckets by using "checkbox" instead of "list" for the gulp-prompt options, but I'm not sure how to then deliver it to multiple buckets.
In a nutshell, if $.prompt returns an array of strings instead of a string, how can I write the source to multiple destinations (buckets) instead of a single bucket?
Please keep in mind that gulp.dest() is not used -- only gulp.awspublish() -- and we don't know how many buckets might be selected.
I've never used S3, but if I understand your question correctly, a file js/foo.js should be renamed to default-bucket/dist/js/foo.js and other-bucket/dist/js/foo.js when the checkboxes default-bucket and other-bucket are selected?
Then this should do the trick:
// additionally required modules
var path = require('path');
var through = require('through2').obj;

gulp.task('deploy', ['build'], () => {
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirnames = []; // array for selected buckets

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'checkbox', // use checkbox instead of list
      name: 'dirnames', // use different result name
      message: 'Using the ‘' + config.awsCreds.bucket +
               '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirnames = res.dirnames; // store array of selected buckets
    }))
    // use through2 instead of gulp-rename
    .pipe(through(function(file, enc, done) {
      dirnames.forEach((dirname) => {
        var f = file.clone();
        f.path = path.join(f.base, dirname, 'dist',
                           path.relative(f.base, f.path));
        this.push(f);
      });
      done();
    }))
    .pipe(publisher.publish())
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
Notice the comments where I made changes from the code you posted.
What this does is use through2 to clone each file passing through the stream. Each file is cloned as many times as there were bucket checkboxes selected and each clone is renamed to end up in a different bucket.