I have an AWS S3 bucket configured with SFTP access. I am using WinSCP to copy data from a server to the S3 bucket. I uploaded a 600 GB file using WinSCP, but after the upload completed, the file size on S3 shows only 0 bytes. I did not get any error messages while copying. Does anyone know the solution?
For those who are using Go, commenting out the following lines (the final Read in particular) resolved my issue.
// Open the file from the file path
upFile, err := os.Open(imageFile)
if err != nil {
    return fmt.Errorf("could not open local filepath [%v]: %+v", imageFile, err)
}
defer upFile.Close()

// Get the file info
// upFileInfo, _ := upFile.Stat()
// var fileSize int64 = upFileInfo.Size()
// fileBuffer := make([]byte, fileSize)
// upFile.Read(fileBuffer) // -->> This line was the issue.
That Read call consumes the file's contents and leaves the *os.File offset at EOF, so the uploader that runs afterwards reads zero bytes. Hopefully this will resolve your issue too.
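For reference, here is a minimal sketch of an upload that avoids the problem, assuming the AWS SDK for Go v1 s3manager; the bucket name, region, and file path are placeholders. The unread *os.File is handed straight to the uploader:

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    upFile, err := os.Open("bigfile.bin") // hypothetical local path
    if err != nil {
        log.Fatalf("could not open file: %v", err)
    }
    defer upFile.Close()

    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    uploader := s3manager.NewUploader(sess)

    // The file has not been read from, so its offset is still 0 and the
    // uploader sees the full contents.
    result, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("my-bucket"), // hypothetical bucket name
        Key:    aws.String("bigfile.bin"),
        Body:   upFile,
    })
    if err != nil {
        log.Fatalf("upload failed: %v", err)
    }
    fmt.Println("uploaded to", result.Location)
}

If you do need to read the file beforehand, seek back to the start with upFile.Seek(0, io.SeekStart) before handing it to the uploader.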
I need to download a file from an S3 bucket and store it in the /tmp directory of a Lambda function, then apply a grep command to the file using "os/exec". I am trying to implement this in Go.
I tried the following approach, but it was not successful.
func MyHandler(ctx context.Context, s3Event events.S3Event) {
    for _, record := range s3Event.Records {
        s3record := record.S3
        bucketName := s3record.Bucket.Name
        fileName := s3record.Object.Key
        download_path := "/tmp/"
        file, err := os.Create(download_path + fileName)
        if err != nil {
            fmt.Println(err)
        }
        defer file.Close()
        sess, _ := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
        downloader := s3manager.NewDownloader(sess)
        numBytes, err := downloader.Download(file,
            &s3.GetObjectInput{
                Bucket: aws.String(bucketName),
                Key:    aws.String(fileName),
            })
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println("Downloaded", file.Name(), numBytes, "bytes")
    }
}
Please verify whether my approach is correct, or suggest a correct approach for storing a file from an S3 bucket in the /tmp folder of a Lambda function. Everything needs to be done in Go.
I have the following confusions as well:
Do we need to create the /tmp directory in the Lambda function, or is it already there by default?
Can we run os/exec commands against the /tmp folder and grep the file?
Do we need to delete the downloaded file from the /tmp folder afterwards, and if so, how? Or will the file be deleted automatically?
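Not an authoritative answer, but a minimal sketch of how this could be wired together, assuming the AWS SDK for Go v1 and a hypothetical hardcoded "ERROR" search pattern. To the specific confusions: /tmp exists by default in the Lambda environment, os/exec works against it (grep is available in the Amazon Linux based Go runtime), and files in /tmp are not cleaned up between warm invocations, so removing them explicitly is a good habit. Note that object keys in S3 event records arrive URL-encoded and may contain slashes, so the key is decoded and flattened before os.Create:

package main

import (
    "context"
    "fmt"
    "net/url"
    "os"
    "os/exec"
    "path/filepath"
    "strings"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func MyHandler(ctx context.Context, s3Event events.S3Event) error {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    downloader := s3manager.NewDownloader(sess)

    for _, record := range s3Event.Records {
        bucket := record.S3.Bucket.Name

        // Keys in S3 event notifications are URL-encoded.
        key, err := url.QueryUnescape(record.S3.Object.Key)
        if err != nil {
            return err
        }

        // Flatten the key so "a/b/c.txt" becomes a valid flat file name in /tmp.
        localPath := filepath.Join("/tmp", strings.ReplaceAll(key, "/", "_"))
        file, err := os.Create(localPath)
        if err != nil {
            return err
        }

        _, err = downloader.Download(file, &s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
        })
        file.Close()
        if err != nil {
            return err
        }

        // grep exits with status 1 when nothing matches, which surfaces as an error.
        out, err := exec.Command("grep", "ERROR", localPath).Output()
        if err != nil {
            fmt.Println("grep found no matches (or failed):", err)
        } else {
            fmt.Printf("matches in %s:\n%s", key, out)
        }

        // /tmp persists across warm invocations, so clean up explicitly.
        os.Remove(localPath)
    }
    return nil
}

func main() { lambda.Start(MyHandler) }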
I am using the AWS S3 service to upload images. Yesterday I updated the SDK from v1 to v2 and found that the image upload fails with the following error:
operation error S3: PutObject, https response error StatusCode: 403, RequestID: XXXXXXXXXXX, HostID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
UPDATED:
I have AWS credentials in the .aws folder in my home directory on Linux, in the following format:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
Here is the code:
package main

import (
    "context"
    "fmt"
    "io"
    "net/http"

    "github.com/aws/aws-sdk-go-v2/aws"
    awsconfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    fileName := "test123.jpg"
    filePath := "/BUCKET_NAME/uploads/aman/2021/6/25/"
    res, err := http.Get("https://images.app.goo.gl/mpQ5nXYXjdUMKGgW7")
    if err != nil || res.StatusCode != 200 {
        // handle errors
    }
    defer res.Body.Close()
    UploadFileInS3Bucket(res.Body, fileName, filePath)
}

func UploadFileInS3Bucket(file io.Reader, fileName, filePath string) {
    cfg, err := awsconfig.LoadDefaultConfig(context.TODO(),
        awsconfig.WithRegion("REGION"),
    )
    client := s3.NewFromConfig(cfg)
    uploader := manager.NewUploader(client)
    uploadResp, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String(filePath),
        Key:         aws.String(fileName),
        Body:        file,
        ContentType: aws.String("image"),
    })
    fmt.Println(uploadResp)
    fmt.Println(err)
}
I did not change any credentials, buckets, or regions in my code. However, if I run the code with SDK v1, it works fine and the images upload.
What is going wrong with SDK v2?
After spending a couple of days on this, I came to know that SDK v2 expects the following format for the Bucket and Key fields:
fileName := "uploads/2021/6/25/test123.jpg"
filePath := "BUCKET_NAME"
Basically, these two fields behave vice versa between SDK v1 and v2: in v2, Bucket is the bare bucket name and Key carries the full object path. Above is the v2 format; below is the v1 format that used to work:
fileName := "test123.jpg"
filePath := "/BUCKET_NAME/uploads/2021/6/25/"
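A minimal sketch of the corrected v2 call with the fields split that way (region, bucket, and key are the placeholders from the example above; the body is read from a local file here for brevity):

package main

import (
    "context"
    "log"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    awsconfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := awsconfig.LoadDefaultConfig(context.TODO(), awsconfig.WithRegion("REGION"))
    if err != nil {
        log.Fatal(err)
    }

    f, err := os.Open("test123.jpg")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    uploader := manager.NewUploader(s3.NewFromConfig(cfg))
    // v2: Bucket is the bare bucket name; Key carries the full object path.
    _, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
        Bucket: aws.String("BUCKET_NAME"),
        Key:    aws.String("uploads/2021/6/25/test123.jpg"),
        Body:   f,
    })
    if err != nil {
        log.Fatal(err)
    }
}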
I have a REST API with two endpoints: one handles the POST request to upload to S3, and the other serves the GET request, downloading the file from the bucket and serving it to the client. Now:
1) I get an access denied response when I try to download the file from S3 24 hours after upload, although I was able to retrieve the file on the same day. Is there something that needs to be configured to prevent this? Does it relate to something like this: https://medium.com/driven-by-code/on-the-importance-of-correct-headers-and-metadata-in-s3-origins-for-aws-cloudfront-50da2f9370ae?
2) When the download is initiated to get the file from S3, I get a timeout error after 30 seconds. The server is configured with a WriteTimeout of 15 minutes. How can one prevent this timeout? Note: this is not a Lambda function or any type of serverless function, and I am not using CloudFront either (not to my knowledge). Here is a code snippet for the GET request:
buf := aws.NewWriteAtBuffer([]byte{})
s3Svc := s3.New(s)
downloader := s3manager.NewDownloaderWithClient(s3Svc, func(d *s3manager.Downloader) {
    d.PartSize = 64 * MB
    d.Concurrency = 3
})
result, err := downloader.Download(
    buf, &s3.GetObjectInput{
        Bucket: aws.String(rc.AWS.Bucket),
        Key:    aws.String(fileName),
    })
if err != nil {
    return nil, err
}
d := bytes.NewReader(buf.Bytes())
w.Header().Set("Content-Disposition", "filename="+OriginalFileName)
http.ServeContent(w, r, fileName, time.Now(), d)
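One pattern worth considering, sketched below under the assumption that buffering the whole object in memory contributes to the slowness (the 30-second cutoff itself often comes from a proxy or load balancer in front of the server rather than from Go's WriteTimeout): stream the object body straight to the client instead of downloading it fully first. Names here are hypothetical stand-ins for the question's variables.

package handlers

import (
    "io"
    "net/http"
    "strconv"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/s3"
)

// serveFromS3 streams an S3 object directly to the HTTP response.
func serveFromS3(w http.ResponseWriter, r *http.Request, s3Svc *s3.S3, bucket, key, originalFileName string) error {
    obj, err := s3Svc.GetObjectWithContext(r.Context(), &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        return err
    }
    defer obj.Body.Close()

    w.Header().Set("Content-Disposition", "attachment; filename="+originalFileName)
    if obj.ContentLength != nil {
        w.Header().Set("Content-Length", strconv.FormatInt(*obj.ContentLength, 10))
    }

    // Copy straight from S3 to the client; only a small buffer is held in memory.
    _, err = io.Copy(w, obj.Body)
    return err
}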
I have written a Lambda function that executes another executable file named abc.exe.
I have created a zip of the Lambda function and uploaded it to AWS, but I am not sure where to put my "abc.exe".
I tried putting it in the same zip, but I get the error below:
exec: "abc": executable file not found in $PATH:
Here is my lambda function code:
func HandleLambdaEvent(request Request) (Response, error) {
    fmt.Println("Input", request.Input)
    fmt.Println("Output", request.Output)
    cmd := exec.Command("abc", "-v", "--lambda", request.Input, "--out", request.Output)
    var out bytes.Buffer
    var stderr bytes.Buffer
    cmd.Stdout = &out
    cmd.Stderr = &stderr
    err := cmd.Run()
    if err != nil {
        fmt.Println(fmt.Sprint(err) + ": " + stderr.String())
        return Response{Message: stderr.String()}, nil
    }
    fmt.Println("Result: " + out.String())
    return Response{Message: fmt.Sprintf("%s and %s are input and output", request.Input, request.Output)}, nil
}
Update:
Trial 1:
I uploaded abc.exe to S3, and in my HandleLambdaEvent function I download it to the /tmp folder. When I then try to execute it after a successful download, I get the error below:
fork/exec /tmp/abc: no such file or directory:
Code to download abc.exe:
file, err2 := os.Create("tmp/abc.exe")
if err2 != nil {
    fmt.Println("Unable to create file %q, %v", err2)
}
defer file.Close()
sess, _ := session.NewSession(&aws.Config{
    Region: aws.String(region)},
)
downloader := s3manager.NewDownloader(sess)
numBytes, err2 := downloader.Download(file,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String("abc.exe"),
    })
if err2 != nil {
    fmt.Println("Unable to download item %q, %v", fileName, err2)
}
fmt.Println("Downloaded", file.Name(), numBytes, "bytes")
file.Close()
Are you sure you can even execute an external binary? That seems counter-intuitive to me, like it violates the point of Lambda.
Perfectly acceptable. Have a look at Running Arbitrary Executables in AWS Lambda on the AWS Compute Blog.
I am not sure where to put my "abc.exe"
To run executables on Lambda, package them in the ZIP file you upload, then do something like:
exec.Command(path.Join(os.Getenv("LAMBDA_TASK_ROOT"), "abc.exe"))
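Putting that together, a sketch of what the handler could look like, assuming abc is a statically linked Linux binary shipped at the root of the deployment ZIP (zipped on a system that preserves the execute bit); the flags are carried over from the question:

package main

import (
    "bytes"
    "os"
    "os/exec"
    "path"

    "github.com/aws/aws-lambda-go/lambda"
)

type Request struct {
    Input  string
    Output string
}

type Response struct {
    Message string
}

func HandleLambdaEvent(request Request) (Response, error) {
    // LAMBDA_TASK_ROOT points at the directory where the ZIP was unpacked.
    bin := path.Join(os.Getenv("LAMBDA_TASK_ROOT"), "abc")

    var out, stderr bytes.Buffer
    cmd := exec.Command(bin, "-v", "--lambda", request.Input, "--out", request.Output)
    cmd.Stdout = &out
    cmd.Stderr = &stderr
    if err := cmd.Run(); err != nil {
        return Response{Message: stderr.String()}, err
    }
    return Response{Message: out.String()}, nil
}

func main() { lambda.Start(HandleLambdaEvent) }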
What sort of file is the .exe file? Is it a Windows app?
You won't be able to run Windows apps on Lambda. The linked blog post says: "If you compile your own binaries, ensure that they're either statically linked or built for the matching version of Amazon Linux."
I am trying to upload an image to AWS S3.
The web app runs on a Tomcat server on my local desktop.
When I upload the image from the server, I can see the file details in the HTTP request's multipart file, and I am able to get its size and other details.
This is how I set up the connection:
File convFile = new File(file.getOriginalFilename());
file.transferTo(convFile);
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_WEST_2) // regionName is a string for a region not supported by the SDK yet
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("key", "accessId")))
        // .setEndpointConfiguration(new EndpointConfiguration("https://s3.console.aws.amazon.com", "us-west-1"))
        .enablePathStyleAccess()
        .disableChunkedEncoding()
        .build();
s3.putObject(new PutObjectRequest(bucketName, "key", convFile));
I tried two methodologies:
1) Converting the multipart file to java.io.File and uploading.
Error: com.amazonaws.SdkClientException: Unable to calculate MD5 hash: MyImage.png (No such file or directory)
2) Sending the image as a byte stream.
Error: java.io.FileNotFoundException: /path/to/tomcat/MyImage.tmp not found
The actual image name is MyImage.png.
Whichever method I try, I get an exception.
OK, there were several issues.
I had mistyped the region for a different set of keys.
But the issue was still happening, so I went back to SDK version 1.11.76. There were still some problems, and this is how I fixed them:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentType(file.getContentType());

byte[] contentBytes = null;
try {
    InputStream is = file.getInputStream();
    contentBytes = IOUtils.toByteArray(is);
} catch (IOException e) {
    System.err.printf("Failed while reading bytes from %s", e.getMessage());
}

Long contentLength = Long.valueOf(contentBytes.length);
objectMetadata.setContentLength(contentLength);
objectMetadata.setHeader("filename", fileNameWithExtn);

/*
 * Reobtain the tmp uploaded file as an input stream
 */
InputStream inputStream = file.getInputStream();

// If I don't do this, I think I was getting the file-not-found or MD5 error.
File convFile = new File(fileNameWithExtn);
file.transferTo(convFile);

// You need commons-io in your pom.xml for this FileUtils to work,
// not the Apache FileUtils.
FileUtils.copyInputStreamToFile(inputStream, convFile);

AmazonS3 s3 = new AmazonS3Client(new AWSStaticCredentialsProvider(
        new BasicAWSCredentials("<yourkeyId>", "<YourAccessKey>")));
s3.setRegion(Region.US_West.toAWSRegion());
s3.setEndpoint("yourRegion.amazonaws.com");
versionId = s3.putObject(new PutObjectRequest("YourBucketName", name, convFile)).getVersionId();