I'm trying to deploy a Cloud Function to a subfolder of my bucket like this:
gcloud beta functions deploy $FUNCTION_NAME_STAGING --stage-bucket bucket/staging --trigger-http --entry-point handle
but I get this error
ERROR: (gcloud.beta.functions.deploy) argument --stage-bucket: Invalid value 'bucket/staging': Bucket must only contain lower case Latin letters, digits and characters . _ -. It must start and end with a letter or digit and be from 3 to 232 characters long. You may optionally prepend the bucket name with gs:// and append / at the end.
I cannot find a way to do this in the documentation. Is it possible?
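Judging from the error text alone, --stage-bucket accepts only a bare bucket name, optionally prefixed with gs://, and not a bucket/path value, so the only form the flag appears to accept drops the staging subfolder:
gcloud beta functions deploy $FUNCTION_NAME_STAGING --stage-bucket gs://bucket --trigger-http --entry-point handle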
I have an S3 bucket with thousands of folders containing millions of files.
The problem is that many file names contain special characters such as commas, #, and £, which result in broken URLs.
Is there any way I can remove specific special characters from all the file names?
I have tried the CLI command aws s3 mv <source_file_name> <new_file_name>, but even there I am unable to access some of the files because of the special characters.
Is there a way, or a script, that would help?
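A minimal sketch of one possible approach with the AWS CLI, assuming a hypothetical bucket named my-bucket: list every key, strip the offending characters, and move each affected object, quoting the key so the shell never interprets the special characters. The character set in the sed expression is an assumption; adjust it to your data.

aws s3api list-objects-v2 --bucket my-bucket --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while IFS= read -r key; do
      # drop commas, # and £ from the key (assumed character set)
      new_key=$(printf '%s' "$key" | sed 's/[,#£]//g')
      if [ "$key" != "$new_key" ]; then
        # quoting the full URL keeps the shell from mangling the special characters
        aws s3 mv "s3://my-bucket/$key" "s3://my-bucket/$new_key"
      fi
    done

Keys containing tabs or newlines would still break this sketch, so a dry run (echo the mv command first) is advisable before touching millions of files.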
Had an issue with not being able to push or pull from an AWS ECR registry with the following cryptic error:
error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"
Several hours of googling suggested it was a protocol issue. It turned out the image name:
xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/snowshu__test
was the issue: AWS ECR errors when the image name contains double underscores.
This contradicts the ECR naming documentation.
You cannot have two underscores next to each other in a repository name.
As per the Docker Registry API:
A component of a repository name must be at least one lowercase, alpha-numeric characters, optionally separated by periods, dashes or underscores. More strictly, it must match the regular expression [a-z0-9]+(?:[._-][a-z0-9]+)*.
Renaming the image to
xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/snowshu_test
solved the issue.
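To sanity-check a candidate name against that regular expression before pushing, a quick sketch with grep (grep -E has no non-capturing (?:...) groups, so a plain group is used):

echo 'snowshu__test' | grep -Eq '^[a-z0-9]+([._-][a-z0-9]+)*$' && echo valid || echo invalid
# prints "invalid": after a separator the regex requires [a-z0-9], so a second consecutive _ can never match
echo 'snowshu_test' | grep -Eq '^[a-z0-9]+([._-][a-z0-9]+)*$' && echo valid || echo invalid
# prints "valid"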
I have noticed that AWS CloudFormation does not like special characters.
When I update a key:value pair in our pipeline.yml file with a value containing special characters,
e.g. PAR_FTP_PASS: ^XoN*H89Ie!rhpl!wan=Jcyo6mo, I see the following error:
parameters[5] ParameterKey, ParameterValue or UsePreviousValue expected
I am able to update the value through the AWS CloudFormation UI.
It seems like the issue has to do with AWS CloudFormation parsing the YAML file.
Is there a workaround for this issue?
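One workaround worth trying (a sketch, assuming a hypothetical stack name and a hypothetical parameter key without underscores, since CloudFormation parameter names must be alphanumeric): pass the parameters as a JSON file via file:// so the CLI never has to parse the special characters out of shorthand syntax or YAML.

cat > params.json <<'EOF'
[
  { "ParameterKey": "ParFtpPass", "ParameterValue": "^XoN*H89Ie!rhpl!wan=Jcyo6mo" }
]
EOF
aws cloudformation update-stack --stack-name my-pipeline-stack --use-previous-template --parameters file://params.json

The quoted heredoc ('EOF') also stops the shell from expanding ^, * and ! in the password.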
AWS Tags have some restrictions on what they can contain, see here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions
A key note which can catch people out is: "Although EC2 allows for any character in its tags, other services are more restrictive. The allowed characters across services are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / #."
So I'd check whether the service you are applying this to supports that string.
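As a quick sanity check, a small sketch that tests a value against the ASCII subset of that allowed set; per the docs, UTF-8 letters beyond ASCII are also permitted, which this simple character class does not cover:

printf '%s' '^XoN*H89Ie!rhpl!wan=Jcyo6mo' | grep -Eq '^[A-Za-z0-9 +=._:/#-]*$' \
  && echo 'allowed across services' || echo 'contains characters some services reject'
# prints the second message: ^, * and ! are outside the allowed set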
I am creating an AWS Data Pipeline using the architect provided in the AWS web console.
Everything is set up OK: my EmrCluster is configured and starts successfully.
But when I try to submit an EmrActivity I run into the following problem:
in the step section of the EmrActivity I need to pass a --packages argument with three packages.
As far as I understand, the step in an EmrActivity is a comma-separated value, and commas (,) are replaced with spaces in the resulting step arguments.
On the other hand, the --packages argument is itself comma-separated when there are multiple packages.
So when I try to pass it as an argument, the commas get replaced with spaces, which makes the step invalid.
This is the statement I need, exactly as is, in the resulting EMR step:
--packages com.amazonaws:aws-java-sdk-s3:1.11.228,org.apache.hadoop:hadoop-aws:2.6.0,org.postgresql:postgresql:42.1.4
Is there any way to escape the commas?
So far I have tried the \\\\ escape mentioned in http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-emractivity.html
It did not work.
When you use \\\\, it escapes the slashes and the comma still gets replaced.
You can try using three slashes instead, which worked for me: \\\,
I hope that works.
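So, relying on the same three-slash behavior described above, the step string would carry \\\, in place of each literal comma inside the --packages value. A sketch, where command-runner.jar, spark-submit and the s3://my-bucket/my-job.py script are placeholder assumptions for whatever your step actually runs:

command-runner.jar,spark-submit,--packages,com.amazonaws:aws-java-sdk-s3:1.11.228\\\,org.apache.hadoop:hadoop-aws:2.6.0\\\,org.postgresql:postgresql:42.1.4,s3://my-bucket/my-job.py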
I want to use Regex to find an S3 directory path in AWS Data Pipeline.
This is for an S3 Data Node. And then I will do a Redshift Copy from S3 to a Redshift table.
Example S3 path: S3://foldername/hh=10
Can we use a regex to find hh=##, where ## could be any number from 0 to 24?
The goal is to copy all the files in folders whose names are hh=1, hh=2, hh=3, etc. (hh is the hour).
Here's a bit of regex that will capture the last 1 or 2 digits after 'hh=', at the end of the line.
/hh=(\d{1,2})$/
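For a quick command-line check of that pattern (with \d written as [0-9], since grep -E does not support \d, and the capture group dropped because grep -o prints the whole match):

echo 'S3://foldername/hh=10' | grep -Eo 'hh=[0-9]{1,2}$'
# prints: hh=10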