I have a REST API with two endpoints: one handles a POST request that uploads a file to S3, and the other handles a GET request that downloads the file from the bucket and serves it to the client. Two issues:
1) I get an access denied response when I try to download the file from S3 24 hours after upload, although I was able to retrieve the file on the same day. Is there something that needs to be configured to prevent this? Does it relate to something like this https://medium.com/driven-by-code/on-the-importance-of-correct-headers-and-metadata-in-s3-origins-for-aws-cloudfront-50da2f9370ae?
2) When the download is initiated to get the file from S3, I get a timeout error after 30 seconds. The server is configured with a WriteTimeout of 15 minutes. How can one prevent this timeout? Note: this is not a Lambda function or any other type of serverless function, and I am not using CloudFront either (not to my knowledge). Here is a code snippet for the GET request:
buf := aws.NewWriteAtBuffer([]byte{})
s3Svc := s3.New(s)
downloader := s3manager.NewDownloaderWithClient(s3Svc, func(d *s3manager.Downloader) {
    d.PartSize = 64 * 1024 * 1024 // 64 MB parts
    d.Concurrency = 3
})
// Download the whole object into the in-memory buffer.
_, err := downloader.Download(buf, &s3.GetObjectInput{
    Bucket: aws.String(rc.AWS.Bucket),
    Key:    aws.String(fileName),
})
if err != nil {
    return nil, err
}
// Serve the buffered bytes to the client.
d := bytes.NewReader(buf.Bytes())
w.Header().Set("Content-Disposition", "filename="+OriginalFileName)
http.ServeContent(w, r, fileName, time.Now(), d)
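On the timeout: note that this handler buffers the entire object in memory and writes nothing to the client until the S3 download has finished, so large files take a long time to reach the first byte. Below is a minimal sketch of a streaming alternative, assuming SDK for Go v1, the same s3Svc client and handler shape as above, and imports io, log, and strconv (illustrative, not a confirmed fix for the 30-second timeout):

// Stream the object straight from S3 to the HTTP response instead of
// buffering it in memory first.
obj, err := s3Svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String(rc.AWS.Bucket),
    Key:    aws.String(fileName),
})
if err != nil {
    return nil, err
}
defer obj.Body.Close()
w.Header().Set("Content-Disposition", "filename="+OriginalFileName)
if obj.ContentLength != nil {
    w.Header().Set("Content-Length", strconv.FormatInt(*obj.ContentLength, 10))
}
// io.Copy writes to the client as bytes arrive from S3.
if _, err := io.Copy(w, obj.Body); err != nil {
    // The client may have disconnected mid-transfer.
    log.Println("streaming from S3 failed:", err)
}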
I am using the AWS S3 service to upload images. Yesterday I updated the SDK from v1 to v2 and found that the image upload fails with the following error:
operation error S3: PutObject, https response error StatusCode: 403, RequestID: XXXXXXXXXXX, HostID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
UPDATED:
I have AWS credentials in the .aws folder in my home folder on Linux, in the following format:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
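As a sanity check while debugging SignatureDoesNotMatch, here is a minimal sketch (using the same aws-sdk-go-v2 packages imported in the code below; purely illustrative) that prints which access key the v2 config resolver actually picked up:

cfg, err := awsconfig.LoadDefaultConfig(context.TODO())
if err != nil {
    fmt.Println("config error:", err)
    return
}
creds, err := cfg.Credentials.Retrieve(context.TODO())
if err != nil {
    fmt.Println("credentials error:", err)
    return
}
fmt.Println("resolved access key:", creds.AccessKeyID)

If the printed key is not the one in ~/.aws/credentials, the mismatch is coming from a different credentials source (environment variables take precedence over the shared credentials file).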
Here is the code:
package main
import (
    "context"
    "fmt"
    "io"
    "net/http"

    "github.com/aws/aws-sdk-go-v2/aws"
    awsconfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
    fileName := "test123.jpg"
    filePath := "/BUCKET_NAME/uploads/aman/2021/6/25/"
    res, err := http.Get("https://images.app.goo.gl/mpQ5nXYXjdUMKGgW7")
    if err != nil {
        // handle the error; res may be nil here
        return
    }
    defer res.Body.Close()
    if res.StatusCode != http.StatusOK {
        // handle the unexpected status code
        return
    }
    UploadFileInS3Bucket(res.Body, fileName, filePath)
}
func UploadFileInS3Bucket(file io.Reader, fileName, filePath string) {
    cfg, err := awsconfig.LoadDefaultConfig(context.TODO(),
        awsconfig.WithRegion("REGION"),
    )
    if err != nil {
        fmt.Println(err)
        return
    }
    client := s3.NewFromConfig(cfg)
    uploader := manager.NewUploader(client)
    uploadResp, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String(filePath),
        Key:         aws.String(fileName),
        Body:        file,
        ContentType: aws.String("image"),
    })
    fmt.Println(uploadResp)
    fmt.Println(err)
}
I did not change any credentials/buckets/regions in my code. However, if I run the code with SDK v1 it works fine and the images are uploaded.
What is going wrong with SDK v2?
After spending a couple of days on this, I came to learn that SDK v2 expects the following format for the Bucket and Key fields:
fileName := "uploads/2021/6/25/test123.jpg"
filePath := "BUCKET_NAME"
Basically, the behaviour of these two fields is swapped between SDK v1 and v2. Above is v2; below is v1:
fileName := "test123.jpg"
filePath := "/BUCKET_NAME/uploads/2021/6/25/"
I need some help, because the results are not being written to the S3 bucket.
What I did:
import time
import boto3

query = 'SELECT * FROM db_lambda.tb_inicial limit 10'
DATABASE = 'db_lambda'
output = 's3://bucket-lambda-test1/result/'

def lambda_handler(event, context):
    client = boto3.client('athena')
    # Start the query execution
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': DATABASE
        },
        ResultConfiguration={
            'OutputLocation': output,
        }
    )
    return response
IAM role created with:
AmazonS3FullAccess
AmazonAthenaFullAccess
CloudWatchLogsFullAccess
AmazonVPCFullAccess
AWSLambda_FullAccess
Message when running the Lambda:
Response:
{
  "statusCode": 200,
  "body": "\"Hello from Lambda!\""
}
Request ID:
"f2dd5cd2-070c-41ea-939f-d4909ce39fd0"
Function logs:
START RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0 Version: $LATEST
END RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0
REPORT RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0 Duration: 0.84 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 52 MB
How I did the test: in the Lambda console I configured and saved a new test event with an empty payload:
{
}
The "Hello from Lambda" message is the default code in a Lambda function. It would appear that you did not click 'Deploy' before testing the function. Clicking Deploy will save the Lambda code.
Also, once you get it running, please note that start_query_execution() will simply start the Athena query. You will need to use get_query_results() to obtain the results.
I have an AWS S3 bucket configured for SFTP access, and I am using WinSCP to copy data from a server to the S3 bucket. I uploaded a 600 GB file using WinSCP, but after the upload completed, the file size on S3 shows only 0 bytes. I did not get any error messages while copying. Does anyone know the solution?
For those who are using Golang, commenting out the following line resolved my issue.
// Open the file from the file path
upFile, err := os.Open(imageFile)
if err != nil {
    return fmt.Errorf("could not open local filepath [%v]: %+v", imageFile, err)
}
defer upFile.Close()

// Get the file info
// upFileInfo, _ := upFile.Stat()
// var fileSize int64 = upFileInfo.Size()
// fileBuffer := make([]byte, fileSize)
// upFile.Read(fileBuffer) // -->> This line was the issue.
The Read call advances the *os.File reader to EOF, so if the same upFile handle is then passed to the uploader as the Body, the uploader reads zero bytes and writes an empty object. Hopefully this will resolve your issue too.
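For completeness, a minimal sketch of what the upload can look like after the fix, assuming the v1 s3manager uploader and an existing session sess (the bucket and key are placeholders, not from the original answer):

uploader := s3manager.NewUploader(sess)
_, err = uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String("my-bucket"), // placeholder
    Key:    aws.String("my-key"),    // placeholder
    Body:   upFile,                  // reader is still at offset 0
})
if err != nil {
    return fmt.Errorf("upload failed: %w", err)
}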
I'm trying the AutoML Vision ML Codelab from the Cloud Healthcare API GitHub tutorials.
https://github.com/GoogleCloudPlatform/healthcare/blob/master/imaging/ml_codelab/breast_density_auto_ml.ipynb
I ran the Export DICOM data cell in the Convert DICOM to JPEG section, and the request, as well as all of the preceding cells, succeeded.
But waiting for operation completion times out, and the operation never finishes.
(The ExportDicomData request status on the Dataset page stays "Running" for over a day. I tried many times, and all the requests got stuck at "Running". A few times I started over from scratch, with the same results.)
What I did so far:
1) Removed "output_config", since an INVALID_ARGUMENT error occurs otherwise.
https://github.com/GoogleCloudPlatform/healthcare/issues/133
2) Enabled the Cloud Resource Manager API, since it is needed.
This is the cell code.
# Path to export DICOM data.
dicom_store_url = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores', dicom_store_id)
path = dicom_store_url + ":export"
# Headers (send request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
# output_config = {'output_config': {'gcs_destination': {'uri_prefix': jpeg_folder, 'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}}
output_config = {'gcs_destination': {'uri_prefix': jpeg_folder, 'mime_type': 'image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50'}}
body = json.dumps(output_config)
resp, content = http.request(path, method='POST', headers=headers, body=body)
assert resp.status == 200, 'error exporting to JPEG, code: {0}, response: {1}'.format(resp.status, content)
print('Full response:\n{0}'.format(content))
# Record operation_name so we can poll for it later.
response = json.loads(content)
operation_name = response['name']
This is the result of waiting.
Waiting for operation completion...
Full response:
{
"name": "projects/my-datalab-tutorials/locations/us-central1/datasets/sample-dataset/operations/18300485449992372225",
"metadata": {
"#type": "type.googleapis.com/google.cloud.healthcare.v1beta1.OperationMetadata",
"apiMethodName": "google.cloud.healthcare.v1beta1.dicom.DicomService.ExportDicomData",
"createTime": "2019-08-18T10:37:49.809136Z"
}
}
AssertionError Traceback (most recent call last)
<ipython-input-18-1a57fd38ea96> in <module>()
21 timeout = time.time() + 10*60 # Wait up to 10 minutes.
22 path = os.path.join(HEALTHCARE_API_URL, operation_name)
---> 23 _ = wait_for_operation_completion(path, timeout)
<ipython-input-18-1a57fd38ea96> in wait_for_operation_completion(path, timeout)
15
16 print('Full response:\n{0}'.format(content))
---> 17 assert success, "operation did not complete successfully in time limit"
18 print('Success!')
19 return response
AssertionError: operation did not complete successfully in time limit
API Version is v1beta1.
I was wondering if somebody has any suggestions.
Thank you.
After several more attempts, and after leaving it running overnight, it finally succeeded. I don't know why.
There was a recent update to the codelabs. The error message is due to the timeout in the codelab and not the actual operation. This has been addressed in the update. Please let me know if you are still running into any issues!
I have written a lambda function that executes another exe file named abc.exe.
Now I have created a zip of the lambda function and uploaded it to AWS. I am not sure where to put my "abc.exe".
I tried putting it in the same zip but I get below error:
exec: "abc": executable file not found in $PATH:
Here is my lambda function code:
func HandleLambdaEvent(request Request) (Response, error) {
    fmt.Println("Input", request.Input)
    fmt.Println("Output", request.Output)
    cmd := exec.Command("abc", "-v", "--lambda", request.Input, "--out", request.Output)
    var out bytes.Buffer
    var stderr bytes.Buffer
    cmd.Stdout = &out
    cmd.Stderr = &stderr
    err := cmd.Run()
    if err != nil {
        fmt.Println(fmt.Sprint(err) + ": " + stderr.String())
        return Response{Message: stderr.String()}, nil
    }
    fmt.Println("Result: " + out.String())
    return Response{Message: fmt.Sprintf("%s and %s are input and output", request.Input, request.Output)}, nil
}
Update:
Trial 1:
I uploaded abc.exe to S3. Then, in my HandleLambdaEvent function, I download it to the tmp/ folder. When I try to access it after a successful download, I get the error below:
fork/exec /tmp/abc: no such file or directory:
Code to download abc.exe:
// Note: os.Create("tmp/abc.exe") creates the file relative to the
// working directory, not at "/tmp/abc.exe".
file, err2 := os.Create("tmp/abc.exe")
if err2 != nil {
    fmt.Printf("Unable to create file: %v\n", err2)
}
defer file.Close()
sess, _ := session.NewSession(&aws.Config{
    Region: aws.String(region),
})
downloader := s3manager.NewDownloader(sess)
numBytes, err2 := downloader.Download(file,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String("abc.exe"),
    })
if err2 != nil {
    fmt.Printf("Unable to download item %q: %v\n", "abc.exe", err2)
}
fmt.Println("Downloaded", file.Name(), numBytes, "bytes")
file.Close()
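Two things stand out in that snippet: os.Create("tmp/abc.exe") writes to a path relative to the working directory, while the error message references /tmp/abc, and the downloaded file also needs its execute bit set before fork/exec can run it. A minimal sketch of the download-and-run variant under those assumptions (Linux binary, same v1 SDK session as above; the bucket and key are placeholders):

// Create the target under /tmp (the writable path on Lambda) with the
// execute bit already set.
binPath := "/tmp/abc"
f, err := os.OpenFile(binPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0755)
if err != nil {
    return Response{}, err
}
downloader := s3manager.NewDownloader(sess)
if _, err := downloader.Download(f, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String("abc.exe"), // object key as uploaded
}); err != nil {
    f.Close()
    return Response{}, err
}
f.Close() // flush before exec
cmd := exec.Command(binPath, "-v", "--lambda", request.Input, "--out", request.Output)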
Are you sure you can even execute an external binary? That seems counter-intuitive to me, like it violates the point of Lambda.
Perfectly acceptable. Have a look at Running Arbitrary Executables in AWS Lambda on the AWS Compute Blog.
I am not sure where to put my "abc.exe"
To run executables on Lambda, package them in the ZIP file you upload, then do something like:
exec.Command(path.Join(os.Getenv("LAMBDA_TASK_ROOT"), "abc.exe"))
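A slightly fuller sketch of that approach, assuming abc.exe is a Linux executable bundled at the root of the ZIP (the handler names come from the question; the chmod is defensive, in case the ZIP was built without the execute bit set):

// Resolve the bundled binary relative to where Lambda unpacks the ZIP.
binPath := path.Join(os.Getenv("LAMBDA_TASK_ROOT"), "abc.exe")
// Make sure the file is executable.
if err := os.Chmod(binPath, 0755); err != nil {
    return Response{}, err
}
cmd := exec.Command(binPath, "-v", "--lambda", request.Input, "--out", request.Output)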
What sort of file is the .exe file? Is it a Windows app?
You won't be able to run Windows apps on Lambda. The linked blog post says: If you compile your own binaries, ensure that they’re either statically linked or built for the matching version of Amazon Linux
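For Go specifically, building with GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build generally produces a statically linked Linux binary that satisfies that requirement.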