To get a particular file from S3, I use the script shown below:
# Get the configuration file
outputfilecfg=XXXX
amzFilecfg=XXXX
bucket=XXXX
resource="/${bucket}/${amzFilecfg}"
contentType="text/plain"
dateValue=`date -R`
stringToSigncfg="GET\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=$S3_KEY
s3Secret=$S3_SECRET
signature=`echo -en ${stringToSigncfg} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${amzFilecfg} -o $outputfilecfg
Now I want to be able to get the value of the object metadata as specified by the S3 docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html). I want to do this exclusively through curl, not the aws-cli. Is this possible?
You can get just the object metadata by making a HEAD request instead of a GET request. To make a HEAD request in cURL, use the -I option. Note that with Signature Version 2 the HTTP verb is part of the string to sign, so the signature must be computed over HEAD rather than GET, and since only headers come back you can drop the -o output file:
curl -I -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${amzFilecfg} -o $outputfilecfg
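As a minimal sketch (assuming the object carries a hypothetical user-defined metadata key x-amz-meta-owner, and reusing the HEAD-signed signature from above), a single metadata value can be pulled out of the response headers like this:
# x-amz-meta-owner is a placeholder; substitute whichever x-amz-meta-* key your object actually carries
curl -sI -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${amzFilecfg} \
| grep -i '^x-amz-meta-owner:' | cut -d' ' -f2- | tr -d '\r'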
For more details about either of these, see
S3 documentation for the HEAD Object API
cURL manual
Related
I created an API with AppSync. Now I want to call it with curl, and I get the following error: You are not authorized to make this call.
I guessed the following:
curl -g -X POST -H "Content-Type: application/json" -H "Authorization: Bearer da2-XXXXXXXXXXXXXXXXXXXXXXXXXX" -d '{"query":"listMyModelTypes{listMyModelTypes {items {id title}}}"}' https://wuw4mcnvautpl4v5ox33fdzoq.appsync-api.us-east-1.amazonaws.com/graphql
Or should I also include the API ID somewhere in the query?
Making an AppSync query via cURL or Postman comes down to getting the request body and headers right; the required headers depend on the auth type.
# common variables
API_URL='https://<APPSYNC-ID>.appsync-api.eu-west-1.amazonaws.com/graphql'
QUERY='query GetImages($t: String!) { images(topic:$t) { edges { cursor } } }'
VARIABLES='{"t":"cats"}' # no spaces!
API Key Auth: x-api-key header
API_KEY='da2-XXXXXXXXXXXXXXXXXXXXXXXXXX'
curl -s -XPOST -H "Content-Type:application/graphql" -H "x-api-key:$API_KEY" -d '{"query": "'"$QUERY"'", "variables": '$VARIABLES'}' $API_URL
Token-based Auth (e.g. Cognito): Authorization and host headers
TOKEN='<YOUR JWT AUTH TOKEN HERE>'
HOST='<APPSYNC-ID>.appsync-api.eu-west-1.amazonaws.com'
curl -s -XPOST -H "Content-Type:application/graphql" -H "Authorization:$TOKEN" -H "host:$HOST" -d '{"query": "'"$QUERY"'", "variables": '$VARIABLES'}' $API_URL
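If the inline quoting gets fragile (hence the "no spaces!" caveat above), a hedged alternative, assuming jq is available, is to build the request body first and pass it whole (shown here with the API key variant):
# Build the JSON body with jq so QUERY and VARIABLES may safely contain spaces
BODY=$(jq -n --arg q "$QUERY" --argjson v "$VARIABLES" '{query: $q, variables: $v}')
curl -s -XPOST -H "Content-Type:application/graphql" -H "x-api-key:$API_KEY" -d "$BODY" $API_URL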
I made a bash script that downloads a file from an Amazon S3 bucket and then uploads it back with some transformations applied.
It works fine when run manually, but as soon as I put it in the crontab, either no file is downloaded or the downloaded file is empty.
I get this error:
curl: (56) Received HTTP code 407 from proxy after CONNECT
I am using this code for my process :
#!/bin/sh
outputFile="PATH"
amzFile="AMAZON_FILE_PATH"
bucket="BUCKET"
resource="/${bucket}/${amzFile}"
contentType="application/x-gzip"
dateValue=`date -R`
stringToSign="GET\n\n${contentType}\n${dateValue}\n${resource}"
s3Key="S3_KEY"
s3Secret="S3SECRET"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${amzFile} -o $outputFile
Does anyone have an idea?
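For what it's worth, HTTP 407 means the proxy itself is asking for credentials, and cron jobs do not load the interactive shell profile where proxy variables are usually exported. A minimal sketch of supplying the proxy settings explicitly (the proxy host, port and credentials below are placeholders):
# Placeholders: replace the proxy host, port and credentials with your real values
export https_proxy="http://proxy.example.com:8080"
# or pass them to curl directly
curl --proxy "http://proxy.example.com:8080" \
--proxy-user "proxyuser:proxypassword" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${amzFile} -o $outputFile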
I would like to add a new version of a secret via GCP REST API.
Sadly the docs are pretty bland for REST and not even the URLs are spelled out.
I get a response for:
curl -H "authorization: Bearer $(gcloud auth print-access-token)" 'https://secretmanager.googleapis.com/v1beta1/projects/myproject/secrets/foo'
but only 404 for:
curl -H "authorization: Bearer $(gcloud auth print-access-token)" -H 'content-type: application/json' -d '{"payload":{"data":"foo"}}' 'https://secretmanager.googleapis.com/v1beta1/projects/myproject/secrets/foo/addVersion'
Also tried other permutations.
Can anyone tell me how to construct the REST call to add a new version?
Under the Adding a secret version section of the documentation, you can click on the "API" tab and see:
$ curl "https://secretmanager.googleapis.com/v1/projects/PROJECT_ID/secrets/SECRET_ID:addVersion" \
--request "POST" \
--header "authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--header "x-goog-user-project: project-id" \
--data "{\"payload\": {\"data\": \"${SECRET_DATA}\"}}"
Where:
PROJECT_ID is your GCP project ID
SECRET_ID is the name of the secret for which you want to add a version
SECRET_DATA is the base64-encoded secret.
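For example (a minimal sketch, assuming the plaintext secret is sitting in a shell variable), the base64 payload can be produced like this:
# Base64-encode the plaintext before putting it into payload.data
PLAINTEXT='my-secret-value' # placeholder value
SECRET_DATA=$(echo -n "${PLAINTEXT}" | base64)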
If you pop out the API Explorer, it starts showing you the actual URL. So it is:
https://secretmanager.googleapis.com/v1beta1/projects/myproject/secrets/foo:addVersion
I am using the command below to update the labels of a GCP Cloud Function that is already deployed.
$ gcloud functions deploy GCFunction --update-labels env=dev,app=myapp
Deploying function (may take a while - up to 2 minutes)...failed.
It looks like it does a full deployment when we try to change the labels of an existing function. Can we change a label without doing any deployment, for example through some other API or a Cloud Function that does the same task?
Yes, it works using the Cloud Functions REST API directly.
PROJECT=[[YOUR-PROJECT]]
REGION=[[YOUR-REGION]]
FUNCTION=[[YOUR-FUNCTION]]
ENDPOINT="https://cloudfunctions.googleapis.com/v1"
NAME="projects/${PROJECT}/locations/${REGION}/functions/${FUNCTION}"
URL="${ENDPOINT}/${NAME}"
gcloud functions describe ${FUNCTION} \
--project=${PROJECT} \
--region=${REGION} \
--format="yaml(labels)"
labels:
  app: myapp
  deployment-tool: cli-gcloud
  env: dev
curl \
--request PATCH \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"labels\":{\"env\":\"testing\"}}" \
${URL}?updateMask=labels
gcloud functions describe ${FUNCTION} \
--project=${PROJECT} \
--region=${REGION} \
--format="yaml(labels)"
labels:
  env: testing
NOTE You need to duplicate labels that you wish to preserve. In my example, I did not duplicate app and it is deleted by the PATCH.
NOTE The response body is an async Operation so you'll need to check on its completion.
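A hedged sketch of how the existing labels could be preserved (assuming jq is installed): read the current label map, merge the change into it, and send the merged map in the PATCH body.
# Read the current labels, merge the new value, and PATCH the merged map so nothing is dropped
LABELS=$(gcloud functions describe ${FUNCTION} \
--project=${PROJECT} \
--region=${REGION} \
--format="json(labels)" | jq -c '.labels + {"env":"testing"}')
curl \
--request PATCH \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
--header "content-type: application/json" \
--data "{\"labels\":${LABELS}}" \
${URL}?updateMask=labels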
Update: Operations
If you have the most excellent jq installed (or similar JSON parser), then you can poll the operation's status until it completes (better yet, set a timeout too... for the reader).
ENDPOINT="https://cloudfunctions.googleapis.com/v1"
NAME="projects/${PROJECT}/locations/${REGION}/functions/${FUNCTION}"
URL="${ENDPOINT}/${NAME}"
TOKEN=$(gcloud auth print-access-token)
VALUE="full-testing"
DATA="{\"labels\":{\"env\":\"${VALUE}\"}}"
NAME=$(curl \
--silent \
--request PATCH \
--header "Authorization: Bearer ${TOKEN}" \
--header "content-type: application/json" \
--data "${DATA}" \
${URL}?updateMask=labels |\
jq -r .name) && echo ${NAME}
URL="${ENDPOINT}/${NAME}"
while [ $(curl --silent --request GET --header "Authorization: Bearer ${TOKEN}" ${URL} | jq -r .done) != "true" ]
do
printf "."
sleep 15s
done
gcloud functions describe ${FUNCTION} \
--project=${PROJECT} \
--region=${REGION} \
--format="yaml(labels)"
I was unable to find a gcloud functions operations command implemented.
I am new to S3. We need to move a folder from one container to another using a cURL command. Both containers are accessible with a single key. I am trying to write some sample code:
container=container_source # This is my source container
resource="https://container_source.****.***.com/Folder1/"
contentType="application/octet-stream"
dateValue=`date -R`
stringToSign="COPY\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=b12***********
s3Secret=7************************
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
nohup curl -X COPY -T "container_source.****.***.com/Folder1/" \
-H "Host: ${container}.****.***.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://container_dest.****.***.com/Folder1
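For reference, on the standard S3 API a server-side copy is a PUT with an x-amz-copy-source header rather than a COPY verb, and it copies one object at a time (a "folder" has to be copied key by key). A minimal sketch with placeholder bucket and key names, following the same Signature Version 2 signing pattern used above:
srcBucket=container_source
dstBucket=container_dest
key="Folder1/file1.txt" # placeholder object key; repeat for each object in the folder
dateValue=`date -R`
# SigV2 string to sign: verb, empty Content-MD5, empty Content-Type, date, x-amz-* headers, destination resource
stringToSign="PUT\n\n\n${dateValue}\nx-amz-copy-source:/${srcBucket}/${key}\n/${dstBucket}/${key}"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -X PUT \
-H "Host: ${dstBucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "x-amz-copy-source: /${srcBucket}/${key}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${dstBucket}.s3.amazonaws.com/${key}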