When I try to upload content to an Amazon S3 bucket, I get an AmazonClientException: "Data read has a different length than the expected".
Here is my code:
public Object uploadFile(MultipartFile file) {
    String fileName = System.currentTimeMillis() + "_" + file.getOriginalFilename();
    log.info("uploadFile-> starting file upload " + fileName);
    Path path = Paths.get(file.getOriginalFilename());
    File fileObj = new File(file.getOriginalFilename());
    try (FileOutputStream os = new FileOutputStream(fileObj)) {
        os.write(file.getBytes());
        os.close();
        String uploadFilePath = bucketName + "/" + uploadPath;
        s3Client.putObject(new PutObjectRequest(uploadFilePath, fileName, fileObj));
        Files.delete(path);
    } catch (IOException ex) {
        log.error("error [" + ex.getMessage() + "] occurred while uploading [" + fileName + "] ");
    }
    log.info("uploadFile-> file uploaded process completed at: " + LocalDateTime.now() + " for - " + fileName);
    return "File uploaded : " + fileName;
}
Amazon recommends using the Amazon S3 Java V2 API instead of V1.
The AWS SDK for Java 2.x is a major rewrite of the version 1.x code base. It’s built on top of Java 8+ and adds several frequently requested features. These include support for non-blocking I/O and the ability to plug in a different HTTP implementation at run time.
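For example, here is a minimal sketch of picking the HTTP implementation when you build the client (assuming the apache-client and netty-nio-client modules are on the classpath; the region is just an illustration):

import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;

public class S3ClientFactory {

    // Synchronous client backed by the Apache HTTP client.
    public static S3Client syncClient() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                .httpClientBuilder(ApacheHttpClient.builder())
                .build();
    }

    // Non-blocking (asynchronous) client backed by Netty.
    public static S3AsyncClient asyncClient() {
        return S3AsyncClient.builder()
                .region(Region.US_EAST_1)
                .httpClientBuilder(NettyNioAsyncHttpClient.builder())
                .build();
    }
}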
To upload content to an Amazon S3 bucket, use this V2 code.
public static String putS3Object(S3Client s3,
                                 String bucketName,
                                 String objectKey,
                                 String objectPath) {
    try {
        Map<String, String> metadata = new HashMap<>();
        metadata.put("myVal", "test");

        PutObjectRequest putOb = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .metadata(metadata)
                .build();

        // getObjectFile reads the local file at objectPath into a byte array (see the sketch below).
        PutObjectResponse response = s3.putObject(putOb,
                RequestBody.fromBytes(getObjectFile(objectPath)));
        return response.eTag();

    } catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
    return "";
}
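getObjectFile(objectPath) is a private helper from the full example and is not shown above. A minimal sketch of it, assuming it simply reads the local file at objectPath into a byte array:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: read the file at objectPath into a byte array.
private static byte[] getObjectFile(String objectPath) {
    try {
        return Files.readAllBytes(Paths.get(objectPath));
    } catch (IOException e) {
        throw new RuntimeException("Unable to read " + objectPath, e);
    }
}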
Full example here.
If you are not familiar with V2, please refer to this doc topic:
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
AWS Java SDK 1.x
s3.putObject(bucketName, key, new File(filePath));
AWS Java SDK 2.x
PutObjectRequest putObjectRequest = PutObjectRequest
        .builder()
        .bucket(bucketName)
        .key(key)
        .build();

s3Client.putObject(putObjectRequest, Paths.get(filePath));
I want to add data into Kinesis using a Spring Boot application and React. I am a complete beginner when it comes to Kinesis, AWS, etc., so a beginner-friendly guide would be appreciated.
To add data records to an Amazon Kinesis data stream from a Spring Boot app, you can use the AWS SDK for Java V2, specifically the Amazon Kinesis Java API and its software.amazon.awssdk.services.kinesis.KinesisClient service client.
Because you are a beginner, I recommend that you read the AWS SDK for Java V2 Developer Guide to become familiar with how to work with this Java API. See Developer guide - AWS SDK for Java 2.x.
Here is a code example that shows how to add data records using this service client. See the GitHub repo that has the other required classes here.
package com.example.kinesis;

import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamResponse;

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class StockTradesWriter {

    public static void main(String[] args) {
        final String usage = "\n" +
                "Usage:\n" +
                "    <streamName>\n\n" +
                "Where:\n" +
                "    streamName - The Amazon Kinesis data stream to which records are written (for example, StockTradeStream)\n\n";

        if (args.length != 1) {
            System.out.println(usage);
            System.exit(1);
        }

        String streamName = args[0];
        Region region = Region.US_EAST_1;
        KinesisClient kinesisClient = KinesisClient.builder()
                .region(region)
                .credentialsProvider(ProfileCredentialsProvider.create())
                .build();

        // Ensure that the Kinesis stream is valid.
        validateStream(kinesisClient, streamName);
        setStockData(kinesisClient, streamName);
        kinesisClient.close();
    }

    public static void setStockData(KinesisClient kinesisClient, String streamName) {
        try {
            // Repeatedly send stock trades with a 100 millisecond wait in between.
            StockTradeGenerator stockTradeGenerator = new StockTradeGenerator();

            // Put 50 records for this example.
            int index = 50;
            for (int x = 0; x < index; x++) {
                StockTrade trade = stockTradeGenerator.getRandomTrade();
                sendStockTrade(trade, kinesisClient, streamName);
                Thread.sleep(100);
            }

        } catch (KinesisException | InterruptedException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        System.out.println("Done");
    }

    private static void sendStockTrade(StockTrade trade, KinesisClient kinesisClient,
                                       String streamName) {
        byte[] bytes = trade.toJsonAsBytes();

        // The bytes could be null if there is an issue with the JSON serialization by the Jackson JSON library.
        if (bytes == null) {
            System.out.println("Could not get JSON bytes for stock trade");
            return;
        }

        System.out.println("Putting trade: " + trade);
        PutRecordRequest request = PutRecordRequest.builder()
                .partitionKey(trade.getTickerSymbol()) // Use the ticker symbol as the partition key.
                .streamName(streamName)
                .data(SdkBytes.fromByteArray(bytes))
                .build();
        try {
            kinesisClient.putRecord(request);
        } catch (KinesisException e) {
            System.err.println(e.getMessage());
        }
    }

    private static void validateStream(KinesisClient kinesisClient, String streamName) {
        try {
            DescribeStreamRequest describeStreamRequest = DescribeStreamRequest.builder()
                    .streamName(streamName)
                    .build();

            DescribeStreamResponse describeStreamResponse = kinesisClient.describeStream(describeStreamRequest);

            if (!describeStreamResponse.streamDescription().streamStatus().toString().equals("ACTIVE")) {
                System.err.println("Stream " + streamName + " is not active. Please wait a few moments and try again.");
                System.exit(1);
            }

        } catch (KinesisException e) {
            System.err.println("Error found while describing the stream " + streamName);
            System.err.println(e);
            System.exit(1);
        }
    }
}
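StockTrade and StockTradeGenerator are small helper classes from the GitHub example and are not shown above. As a rough sketch of what StockTrade might look like (assuming Jackson is used for the JSON serialization, as the comment in sendStockTrade suggests; the field names are illustrative):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StockTrade {

    private static final ObjectMapper JSON = new ObjectMapper();

    private String tickerSymbol;
    private double price;
    private long quantity;

    public StockTrade() {
    }

    public StockTrade(String tickerSymbol, double price, long quantity) {
        this.tickerSymbol = tickerSymbol;
        this.price = price;
        this.quantity = quantity;
    }

    public String getTickerSymbol() {
        return tickerSymbol;
    }

    public double getPrice() {
        return price;
    }

    public long getQuantity() {
        return quantity;
    }

    // Serialize this trade to JSON bytes; return null on failure, matching the null check in sendStockTrade.
    public byte[] toJsonAsBytes() {
        try {
            return JSON.writeValueAsBytes(this);
        } catch (JsonProcessingException e) {
            return null;
        }
    }

    @Override
    public String toString() {
        return String.format("ID %s: %d shares at $%.02f", tickerSymbol, quantity, price);
    }
}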
I am trying to upload a file to an S3 bucket using Spring Boot and spring-cloud-starter-aws.
It works fine locally, but once deployed to EBS the files are not getting uploaded.
The code:
public String uploadFileToS3Bucket(MultipartFile file, String path, boolean enablePublicReadAccess) {
    File fileObj = convertMultiPartFileToFile(file);
    String fileName = System.currentTimeMillis() + "_" + file.getOriginalFilename();
    fileName = path + fileName;
    try {
        s3Client.putObject(new PutObjectRequest(bucketName, fileName, fileObj));
        fileObj.delete();
    } catch (Exception e) {
        System.out.println("File Upload Failed due to the following: " + e);
    }
    System.out.println("File uploaded : " + fileName);
    return fileName;
}
convertMultiPartFileToFile method:
private File convertMultiPartFileToFile(MultipartFile file) {
    Random rnd = new Random();
    String randomVal = String.valueOf(100000 + rnd.nextInt(900000));
    String fileName = randomVal + "-" + file.getOriginalFilename();
    System.out.println("upload file name test:::: " + fileName);
    String filePath = "";
    logger.info(env.getProperty("testing"));
    File convertedFile = null;
    try {
        convertedFile = new File(filePath + fileName);
        System.out.println("The converted file: " + convertedFile);
        convertedFile.createNewFile();
        FileOutputStream fos = new FileOutputStream(convertedFile);
        fos.write(file.getBytes());
        fos.close();
    } catch (Exception e) {
        System.out.println("Error converting multipartFile to file:" + e);
    }
    return convertedFile;
}
In the log file I get the below error:
Error converting multipartFile to file:java.io.IOException: Permission denied
File Upload Failed due to the following: com.amazonaws.SdkClientException: Unable to
calculate MD5 hash: 746350-about_us_work.jpeg (No such file or directory)
The EBS IAM Role has S3FullAccess.
It works perfectly locally, but not on the server. Any ideas what the issue could be?
Solved it using the reference from Unable to calculate MD5 : AWS S3 bucket.
The updated code looks like this:
public String uploadFileToS3Bucket(MultipartFile file, String path, boolean enablePublicReadAccess) {
    File fileObj = convertMultiPartFileToFile(file);
    String fileName = System.currentTimeMillis() + "_" + file.getOriginalFilename();
    fileName = path + fileName;
    try {
        InputStream is = file.getInputStream();
        s3Client.putObject(new PutObjectRequest(bucketName, fileName, is, new ObjectMetadata()));
        fileObj.delete();
    } catch (Exception e) {
        System.out.println("File Upload Failed due to the following: " + e);
    }
    System.out.println("File uploaded : " + fileName);
    return fileName;
}
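One caveat with the streaming version (my own hedged note, not part of the original answer): passing an empty ObjectMetadata means the V1 SDK does not know the content length up front, so it may buffer the stream in memory and log a warning for large files. Since MultipartFile already knows the size and content type, you can set them, reusing the variables from the method above:

InputStream is = file.getInputStream();
ObjectMetadata metadata = new ObjectMetadata();
// Tell the SDK the length and type up front so it does not have to buffer the whole stream.
metadata.setContentLength(file.getSize());
metadata.setContentType(file.getContentType());
s3Client.putObject(new PutObjectRequest(bucketName, fileName, is, metadata));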
I don't understand the need for fileObj.delete(). I tried to upload a file with the following code and it works:
public void uploadFileToS3Bucket(MultipartFile file) throws IOException {
    s3client.putObject(
            new PutObjectRequest(
                    bucketName, file.getOriginalFilename(), file.getInputStream(), new ObjectMetadata()));
}
I have been recently involved in a project where I have to leverage the QuickSight APIs and update a dashboard programmatically. I can perform all the other actions but I am unable to update the dashboard from a template. I have tried a couple of different ideas, but all in vain.
Is there anyone who has already worked with the UpdateDashboard API, or who can point me to some detailed documentation where I can understand if I am actually missing anything?
Thanks.
I got this to work using the AWS QuickSight Java V2 API. To make this work, you need to follow the quick start instructions here:
https://docs.aws.amazon.com/quicksight/latest/user/getting-started.html
You need to get these values:
account - your account number
dashboardId - the dashboard ID value
dataSetArn - the dataset ARN value
analysisArn - the analysis ARN value
Once you go through the above topic, you will have all of these resources and be ready to call UpdateDashboard. Here is a Java example that updates a dashboard.
package com.example.quicksight;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.quicksight.QuickSightClient;
import software.amazon.awssdk.services.quicksight.model.*;

/*
   Before running this code example, follow the Getting Started with Data Analysis in Amazon QuickSight topic located here:

   https://docs.aws.amazon.com/quicksight/latest/user/getting-started.html

   This code example uses resources that you created by following that topic, such as the DataSet ARN value.
*/
public class UpdateDashboard {

    public static void main(String[] args) {
        final String USAGE = "\n" +
                "Usage: UpdateDashboard <account> <dashboardId> <dataSetArn> <analysisArn>\n\n" +
                "Where:\n" +
                "    account - the account to use.\n\n" +
                "    dashboardId - the dashboard id value to use.\n\n" +
                "    dataSetArn - the ARN of the dataset.\n\n" +
                "    analysisArn - the ARN of an existing analysis";

        String account = "<account id>";
        String dashboardId = "<dashboardId>";
        String dataSetArn = "<dataSetArn>";
        String analysisArn = "<Analysis Arn>";

        QuickSightClient qsClient = QuickSightClient.builder()
                .region(Region.US_EAST_1)
                .build();

        try {
            DataSetReference dataSetReference = DataSetReference.builder()
                    .dataSetArn(dataSetArn)
                    .dataSetPlaceholder("Dataset placeholder2")
                    .build();

            // Get a template ARN to use.
            String arn = getTemplateARN(qsClient, account, dataSetArn, analysisArn);

            DashboardSourceTemplate sourceTemplate = DashboardSourceTemplate.builder()
                    .dataSetReferences(dataSetReference)
                    .arn(arn)
                    .build();

            DashboardSourceEntity sourceEntity = DashboardSourceEntity.builder()
                    .sourceTemplate(sourceTemplate)
                    .build();

            UpdateDashboardRequest dashboardRequest = UpdateDashboardRequest.builder()
                    .awsAccountId(account)
                    .dashboardId(dashboardId)
                    .name("UpdateTest")
                    .sourceEntity(sourceEntity)
                    .themeArn("arn:aws:quicksight::aws:theme/SEASIDE")
                    .build();

            UpdateDashboardResponse response = qsClient.updateDashboard(dashboardRequest);
            System.out.println("Dashboard " + response.dashboardId() + " has been updated");

        } catch (QuickSightException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    private static String getTemplateARN(QuickSightClient qsClient, String account, String dataset, String analysisArn) {
        String arn = "";
        try {
            DataSetReference setReference = DataSetReference.builder()
                    .dataSetArn(dataset)
                    .dataSetPlaceholder("Dataset placeholder2")
                    .build();

            TemplateSourceAnalysis templateSourceAnalysis = TemplateSourceAnalysis.builder()
                    .dataSetReferences(setReference)
                    .arn(analysisArn)
                    .build();

            TemplateSourceEntity sourceEntity = TemplateSourceEntity.builder()
                    .sourceAnalysis(templateSourceAnalysis)
                    .build();

            CreateTemplateRequest createTemplateRequest = CreateTemplateRequest.builder()
                    .awsAccountId(account)
                    .name("NewTemplate")
                    .sourceEntity(sourceEntity)
                    .templateId("a9a277fb-7239-4890-bc7a-8a3e82d67a37") // Specify a GUID value
                    .build();

            CreateTemplateResponse response = qsClient.createTemplate(createTemplateRequest);
            arn = response.arn();

        } catch (QuickSightException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return arn;
    }
}
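UpdateDashboard kicks off the creation of a new dashboard version, so it can be useful to check the version status afterwards. A hedged sketch, reusing the qsClient, account, and dashboardId values from the example above (the Describe* model classes are covered by the same wildcard import):

DescribeDashboardRequest describeRequest = DescribeDashboardRequest.builder()
        .awsAccountId(account)
        .dashboardId(dashboardId)
        .build();

DescribeDashboardResponse describeResponse = qsClient.describeDashboard(describeRequest);

// The version status shows whether the new dashboard version finished creating successfully.
System.out.println("Dashboard status: " + describeResponse.dashboard().version().status());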
I have an S3 bucket xxx. I wrote a Lambda function that reads data from the S3 bucket and writes those details to an RDS PostgreSQL instance. I can do it with my code. I added a trigger to the Lambda function so it is invoked when a file lands on S3.
But from my code I can only read the file named 'SampleData.csv'. Consider my code given below:
public class LambdaFunctionHandler implements RequestHandler<S3Event, String> {

    private AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

    public LambdaFunctionHandler() {}

    // Test purpose only.
    LambdaFunctionHandler(AmazonS3 s3) {
        this.s3 = s3;
    }

    @Override
    public String handleRequest(S3Event event, Context context) {
        context.getLogger().log("Received event: " + event);
        String bucket = "xxx";
        String key = "SampleData.csv";
        System.out.println(key);

        try {
            S3Object response = s3.getObject(new GetObjectRequest(bucket, key));
            String contentType = response.getObjectMetadata().getContentType();
            context.getLogger().log("CONTENT TYPE: " + contentType);

            // Read the source file as text.
            AmazonS3 s3Client = new AmazonS3Client();
            String body = s3Client.getObjectAsString(bucket, key);
            System.out.println("Body: " + body);
            System.out.println();
            System.out.println("Reading as stream.....");
            System.out.println();

            BufferedReader br = new BufferedReader(new InputStreamReader(response.getObjectContent()));

            // Save the CSV data to the database.
            String csvOutput;
            try {
                Class.forName("org.postgresql.Driver");
                Connection con = DriverManager.getConnection("jdbc:postgresql://ENDPOINT:5432/DBNAME", "USER", "PASSWORD");
                System.out.println("Connected");

                // Read until EOF.
                while ((csvOutput = br.readLine()) != null) {
                    String[] str = csvOutput.split(",");
                    String name = str[1];
                    // Note: values are concatenated into the SQL here; a PreparedStatement would be safer.
                    String query = "insert into schema.tablename(name) values('" + name + "')";
                    Statement statement = con.createStatement();
                    statement.executeUpdate(query);
                }
                System.out.println("Inserted Successfully!!!");

            } catch (Exception ase) {
                context.getLogger().log(String.format(
                        "Error getting object %s from bucket %s. Make sure they exist and"
                                + " your bucket is in the same region as this function.", key, bucket));
                // throw ase;
            }
            return contentType;

        } catch (Exception e) {
            e.printStackTrace();
            context.getLogger().log(String.format(
                    "Error getting object %s from bucket %s. Make sure they exist and"
                            + " your bucket is in the same region as this function.", key, bucket));
            throw e;
        }
    }
}
From my code you can see that I mentioned key="SampleData.csv"; is there any way to get the key inside a bucket without specifying a specific file name?
These two links should help:
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html
You can list objects using prefix and delimiter to find the key you are looking for without passing a specific filename.
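For example, with the same V1 client the question already uses, a minimal sketch that lists the keys under a prefix (the bucket name and the "uploads/" prefix here are placeholders, not values from the question):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListKeysExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();

        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("xxx")          // placeholder bucket name
                .withPrefix("uploads/");        // placeholder prefix

        ListObjectsV2Result result;
        do {
            result = s3.listObjectsV2(request);
            for (S3ObjectSummary summary : result.getObjectSummaries()) {
                System.out.println(summary.getKey());
            }
            // Page through the listing if it is truncated.
            request.setContinuationToken(result.getNextContinuationToken());
        } while (result.isTruncated());
    }
}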
If you need to get the S3 event details, you can enable S3 event notifications to the Lambda function. Refer to the link.
You can enable this by:
Click on 'Properties' inside your bucket
Click on 'Events'
Click 'Add notification'
Give a name and select the type of event (e.g. Put, Delete, etc.)
Give a prefix and suffix if necessary, or else leave them blank, which considers all events
Then, under 'Send to', choose Lambda function and provide the Lambda ARN.
Now the event details will be sent to the Lambda function in JSON format. You can fetch the details from that JSON. The input will look like this:
{"Records":[{"eventVersion":"2.0","eventSource":"aws:s3","awsRegion":"ap-south-1","eventTime":"2017-11-23T09:25:54.845Z","eventName":"ObjectRemoved:Delete","userIdentity":{"principalId":"AWS:AIDAJASDFGZTLA6UZ7YAK"},"requestParameters":{"sourceIPAddress":"52.95.72.70"},"responseElements":{"x-amz-request-id":"A235BER45D4974E","x-amz-id-2":"glUK9ZyNDCjMQrgjFGH0t7Dz19eBrJeIbTCBNI+Pe9tQugeHk88zHOY90DEBcVgruB9BdU0vV8="},"s3":{"s3SchemaVersion":"1.0","configurationId":"sns","bucket":{"name":"example-bucket1","ownerIdentity":{"principalId":"AQFXV36adJU8"},"arn":"arn:aws:s3:::example-bucket1"},"object":{"key":"SampleData.csv","sequencer":"005A169422CA7CDF66"}}}]}
You can access the key as objectname = event['Records'][0]['s3']['object']['key'] (oops, this is for Python)
and then send this info to RDS.
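In Java (which the question uses), the same information is available from the S3Event that Lambda passes to handleRequest, so the key does not have to be hard coded. A minimal sketch, reusing the s3 client and request types from the question's code (getUrlDecodedKey() is available in recent versions of the aws-lambda-java-events library; if yours lacks it, call getKey() and URL-decode the value yourself):

// Inside handleRequest(S3Event event, Context context), instead of hard coding bucket and key:
String bucket = event.getRecords().get(0).getS3().getBucket().getName();

// Object keys in the event payload are URL encoded (for example, spaces become '+'),
// so use the decoded form before calling S3.
String key = event.getRecords().get(0).getS3().getObject().getUrlDecodedKey();

S3Object response = s3.getObject(new GetObjectRequest(bucket, key));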
We have some code that downloads a bunch of S3 files to a local directory. The list of files to retrieve is from a query we run. It only lists files that actually exist in our S3 bucket.
As we loop to retrieve these files, about 10% of them return a 404 error as if the file doesn't exist. I log out the name/location of each such file, so I can go to S3 and check, and sure enough every single one of them IS ON S3 in the location we went looking for it.
Why does S3 throw a 404 when the file exists?
Here is the Groovy code of the script.
class RetrieveS3FilesFromCSVLoader implements Loader {

    private static String missingFilesFile = "00-MISSED_FILES.csv"
    private static String csvFileName = "/csv/s3file2.csv"
    private static String saveFilesToLocation = "/tmp/retrieve/"
    public static final char SEPARATOR = ','

    @Autowired
    DocumentFileService documentFileService

    private void readWithCommaSeparatorSQL() {
        int counter = 0
        String fileName
        String fileLocation
        File missedFiles = new File(saveFilesToLocation + missingFilesFile)
        PrintWriter writer = new PrintWriter(missedFiles)
        File fileCSV = new File(getClass().getResource(csvFileName).toURI())

        fileCSV.splitEachLine(SEPARATOR as String) { nextLine ->
            //if (counter < 15) {
            if (nextLine != null && (nextLine[0] != 'FileLocation')) {
                counter++
                try {
                    //Remove 0, only if client number start with "0".
                    fileLocation = nextLine[0].trim()
                    byte[] fileBytes = documentFileService.getFile(fileLocation)
                    if (fileBytes != null) {
                        fileName = fileLocation.substring(fileLocation.indexOf("/") + 1, fileLocation.length())
                        File file = new File(saveFilesToLocation + fileName)
                        file.withOutputStream {
                            it.write fileBytes
                        }
                        println "$counter) Wrote file ${fileLocation} to ${saveFilesToLocation + fileLocation}"
                    } else {
                        println "$counter) UNABLE TO RETRIEVE FILE ELSE: $fileLocation"
                        writer.println(fileLocation)
                    }
                } catch (Exception e) {
                    println "$counter) UNABLE TO RETRIEVE FILE: $fileLocation"
                    println(e.getMessage())
                    writer.println(fileLocation)
                }
            } else {
                counter++
            }
            //}
        }
        writer.close()
    }
}
Here is the code for getFile(fileLocation) and client creation.
public byte[] getFile(String filename) throws IOException {
    AmazonS3Client s3Client = connectToAmazonS3Service();
    S3Object object = s3Client.getObject(S3_BUCKET_NAME, filename);
    if (object == null) {
        return null;
    }
    byte[] fileAsArray = IOUtils.toByteArray(object.getObjectContent());
    object.close();
    return fileAsArray;
}

/**
 * Connects to Amazon S3.
 *
 * @return instance of AmazonS3Client
 */
private AmazonS3Client connectToAmazonS3Service() {
    AWSCredentials credentials;
    try {
        credentials = new BasicAWSCredentials(S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY);
    } catch (Exception e) {
        throw new AmazonClientException(
                "Cannot load the credentials from the credential profiles file. " +
                "Please make sure that your credentials file is at the correct " +
                "location (~/.aws/credentials), and is in valid format.",
                e);
    }
    AmazonS3Client s3 = new AmazonS3Client(credentials);
    Region usWest2 = Region.getRegion(Regions.US_EAST_1); // note: the variable name says us-west-2, but the region used is us-east-1
    s3.setRegion(usWest2);
    return s3;
}
The code above works for 90% of the files in the list passed to the script, but we know for a fact that 100% of the files exist in S3 at the location string we are passing.
I am just an idiot. I thought it had the production AWS credentials in the properties file, but instead it had development credentials. So I had the wrong credentials.