Uploading .jpg images to S3 with the AWS C++ SDK (authenticating as an IAM user) introduces huge time delays that cannot be explained by network traffic or latency alone. I am using the S3 free tier and MSVC 2017 64-bit for my application (on a Windows 10 PC). Here is a sample code:
Aws::SDKOptions options;
Aws::InitAPI(options);

Aws::Client::ClientConfiguration config;
config.region = Aws::Region::US_EAST_2;

Aws::S3::S3Client s3_client(Aws::Auth::AWSCredentials(KEY, ACCESS_KEY), config);

const Aws::String bucket_name = BUCKET;
const Aws::String object_name = "image.jpg";

Aws::S3::Model::PutObjectRequest put_object_request;
put_object_request.SetBucket(bucket_name);
put_object_request.SetKey(object_name);

std::shared_ptr<Aws::IOStream> input_data =
    Aws::MakeShared<Aws::FStream>("PutObjectInputStream",
                                  "../image.jpg",
                                  std::ios_base::in | std::ios_base::binary);

put_object_request.SetBody(input_data);
put_object_request.SetContentType("image/jpeg");

input_data->seekg(0LL, input_data->end);
put_object_request.SetContentLength(static_cast<long>(input_data->tellg()));

auto put_object_outcome = s3_client.PutObject(put_object_request);
When I upload images bigger than 100 KB, the total execution time of
PutObject(put_object_request);
exceeds 2 minutes for a 520 KB image.
I have tried the same example using Python boto3, and the total upload time for the same image is around 25 s.
Has anyone faced the same issue?
After a closer look at the AWS GitHub repo I figured out the issue.
The problem was that WinHttpSyncHttpClient was timing out and resetting the upload activity internally, thus never exiting the upload thread and finally aborting the transaction. Adding a custom timeout value solved the problem.
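The relevant change is a single line on the client configuration (the same 20-second value is used in the full example below; tune it to your connection):

Aws::Client::ClientConfiguration config;
config.region = Aws::Region::US_EAST_2;
config.requestTimeoutMs = 20000; // give slow uploads enough time instead of the default request timeout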
I re-implemented the example using multipart upload, as it seems more robust and manageable. I had thought multipart upload was unavailable in the C++ SDK, but that is not the case: the TransferManager does the same job for C++ (not by using S3 headers the way the Java, .NET and PHP SDKs do).
Thanks to KaibaLopez and SCalwas from the AWS GitHub repo, who helped me solve the issues (issue1, issue2). I am pasting example code in case anyone faces the same issue:
#include "pch.h"
#include <iostream>
#include <fstream>
#include <filesystem>
#include <aws/core/Aws.h>
#include <aws/core/auth/AWSCredentials.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/Bucket.h>
#include <aws/transfer/TransferManager.h>
#include <aws/transfer/TransferHandle.h>
static const char* KEY = "KEY";
static const char* BUCKET = "BUCKET_NAME";
static const char* ACCESS_KEY = "AKEY";
static const char* OBJ_NAME = "img.jpg";
static const char* const ALLOCATION_TAG = "S3_SINGLE_OBJ_TEST";
int main()
{
Aws::SDKOptions options;
Aws::InitAPI(options);
{
Aws::Client::ClientConfiguration config;
config.region = Aws::Region::US_EAST_2;
config.requestTimeoutMs = 20000;
auto s3_client = std::make_shared<Aws::S3::S3Client>(Aws::Auth::AWSCredentials(KEY, ACCESS_KEY), config);
const Aws::String bucket_name = BUCKET;
const Aws::String object_name = OBJ_NAME;
const Aws::String key_name = OBJ_NAME;
auto s3_client_executor = Aws::MakeShared<Aws::Utils::Threading::DefaultExecutor>(ALLOCATION_TAG);
Aws::Transfer::TransferManagerConfiguration trConfig(s3_client_executor.get());
trConfig.s3Client = s3_client;
trConfig.uploadProgressCallback =
[](const Aws::Transfer::TransferManager*, const std::shared_ptr<const Aws::Transfer::TransferHandle>&transferHandle)
{ std::cout << "Upload Progress: " << transferHandle->GetBytesTransferred() <<
" of " << transferHandle->GetBytesTotalSize() << " bytes" << std::endl;};
std::cout << "File start upload" << std::endl;
auto tranfer_manager = Aws::Transfer::TransferManager::Create(trConfig);
auto transferHandle = tranfer_manager->UploadFile(object_name.c_str(),
bucket_name.c_str(), key_name.c_str(), "multipart/form-data", Aws::Map<Aws::String, Aws::String>());
transferHandle->WaitUntilFinished();
if(transferHandle->GetStatus() == Aws::Transfer::TransferStatus::COMPLETED)
std::cout << "File up" << std::endl;
else
std::cout << "Error uploading: " << transferHandle->GetLastError() << std::endl;
}
Aws::ShutdownAPI(options);
return 0;
}
Related
Can anyone tell me the answer? I have been unable to eat for a few days; thank you for being my benefactor.
I'm using mysql-connector-c++ 8.0 with MySQL 8.0.x.
I want to connect to a remote cloud database. After trying countless times, I have encountered great difficulties. Is there something wrong with my code? I am a newbie to MySQL.
The strange thing is that mysql -h xxx -u root -p works on the Linux command line, but it fails from C++ alone, and the error is always the same:
CDK Error: Connection attempt to the server was aborted. Timeout of 10000 milliseconds was exceeded
#include <iostream>
#include <string>
#include <list>
#include <cstdlib>
#include <mysqlx/xdevapi.h>

using namespace mysqlx;

int main() {
    try {
        Session sess(SessionOption::USER, "root",
                     SessionOption::PWD, "123456",
                     SessionOption::HOST, "172.29.207.112",
                     //SessionOption::HOST, "rm-bp1qp1x588kzb49rf.mysql.rds.aliyuncs.com",
                     SessionOption::PORT, 33060,
                     SessionOption::DB, "demo");
        auto result = sess.sql("select * from person").execute();
        for (auto row : result.fetchAll()) {
            std::cout << row[0] << " " << row[1] << "\n";
        }
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';
    }
}
I finally know the answer. The reason is that the cloud database provider does not support port 33060 of the X Protocol; currently, Alibaba Cloud does not support it. I learned this from their intelligent support bot, but it is not mentioned in the documentation. Alibaba Cloud should update the documentation!
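If your provider only exposes the classic protocol on port 3306, the legacy JDBC-style API that also ships with Connector/C++ 8.0 should work instead of the X DevAPI. A minimal sketch, untested against Alibaba Cloud, reusing the same host/user/password placeholders as above (the header prefix varies by installation):

#include <jdbc/mysql_driver.h>      // may be <mysql_driver.h> depending on the install layout
#include <jdbc/mysql_connection.h>
#include <jdbc/cppconn/statement.h>
#include <jdbc/cppconn/resultset.h>
#include <jdbc/cppconn/exception.h>
#include <iostream>
#include <memory>

int main() {
    try {
        sql::mysql::MySQL_Driver* driver = sql::mysql::get_mysql_driver_instance();
        // classic protocol on port 3306 instead of X Protocol on 33060
        std::unique_ptr<sql::Connection> con(
            driver->connect("tcp://172.29.207.112:3306", "root", "123456"));
        con->setSchema("demo");
        std::unique_ptr<sql::Statement> stmt(con->createStatement());
        std::unique_ptr<sql::ResultSet> res(stmt->executeQuery("select * from person"));
        while (res->next())
            std::cout << res->getString(1) << "\n";
    } catch (const sql::SQLException& e) {
        std::cerr << e.what() << "\n";
    }
}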
I downloaded spdlog and followed example 1.
I moved on to example 2 (create a stdout/stderr logger object) and got stuck. I can actually run it as it is, but if I change spdlog::get("console") to spdlog::get("err_logger") it crashes.
Am I supposed to change it like that?
#include "spdlog/spdlog.h"
#include "spdlog/sinks/stdout_color_sinks.h"
void stdout_example()
{
// create color multi threaded logger
auto console = spdlog::stdout_color_mt("console");
auto err_logger = spdlog::stderr_color_mt("stderr");
spdlog::get("err_logger")->info("loggers can be retrieved from a global registry using the spdlog::get(logger_name)");
}
int main()
{
stdout_example();
return 0;
}
I also tried the basic file logger example:

#include <iostream>
#include "spdlog/sinks/basic_file_sink.h"

void basic_logfile_example()
{
    try
    {
        auto logger = spdlog::basic_logger_mt("basic_logger", "logs/basic-log.txt");
    }
    catch (const spdlog::spdlog_ex& ex)
    {
        std::cout << "Log init failed: " << ex.what() << std::endl;
    }
}

int main()
{
    basic_logfile_example();
    return 0;
}

And I see it creates the basic-log.txt file, but there is nothing in it.
That is because you need to register the err_logger logger first. There is no default err_logger as far as I know. spdlog::get() returns a logger based on its registered name, not the variable name.
You need code like this. The code is more complex than you may need, though:
#include "spdlog/sinks/stdout_color_sinks.h"
#include "spdlog/sinks/rotating_file_sink.h"
void multi_sink_example2()
{
spdlog::init_thread_pool(8192, 1);
auto stdout_sink = std::make_shared<spdlog::sinks::stdout_color_sink_mt >();
auto rotating_sink = std::make_shared<spdlog::sinks::rotating_file_sink_mt>("mylog.txt", 1024*1024*10, 3);
std::vector<spdlog::sink_ptr> sinks {stdout_sink, rotating_sink};
auto logger = std::make_shared<spdlog::async_logger>("err_logger", sinks.begin(), sinks.end(), spdlog::thread_pool(), spdlog::async_overflow_policy::block);
spdlog::register_logger(logger); //<-- this line registers logger for spdlog::get
}
After this code, you can use spdlog::get("err_logger").
You can read about creating and registering loggers here.
I think spdlog::stderr_color_mt("stderr") registers a logger with the name stderr, so spdlog::get("stderr") may work, but I have not tested it myself.
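For completeness, the smallest change to the original snippet is simply to make the registered name and the name passed to spdlog::get match; a quick sketch of both options (based on how the registry factory functions behave):

#include "spdlog/spdlog.h"
#include "spdlog/sinks/stdout_color_sinks.h"

void stdout_example()
{
    // option 1: register the logger under the name you look up later
    auto err_logger = spdlog::stderr_color_mt("err_logger");
    spdlog::get("err_logger")->info("found in the registry, no crash");

    // option 2: keep the name "stderr" and look that name up instead
    auto err_logger2 = spdlog::stderr_color_mt("stderr");
    spdlog::get("stderr")->warn("also found in the registry");
}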
I am trying to upload a file to S3 using the multipart upload feature of the AWS C++ SDK. I could find examples for Java, .NET, PHP, Ruby and the REST API, but didn't find any lead on how to do it in C++. Can you please point me in a direction to achieve the same?
The transfer manager needs the file to be stored on disk and to have a filename, which is not good for streaming. Here's the code template I used for prototyping.
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/CreateMultipartUploadRequest.h>
#include <aws/core/utils/HashingUtils.h>
#include <aws/s3/model/CompleteMultipartUploadRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/s3/model/UploadPartRequest.h>
#include <cassert>
#include <sstream>
#include <iostream>
#include <string>

using namespace Aws::S3::Model;

int main(int argc, char* argv[]) {
    Aws::SDKOptions options;
    Aws::InitAPI(options);

    // use the default credential provider chain
    Aws::Client::ClientConfiguration clientConfiguration;
    clientConfiguration.region = "<your-region>";
    clientConfiguration.endpointOverride = "<endpoint-override>";
    Aws::S3::S3Client s3_client(clientConfiguration);

    std::string bucket = "<bucket>";
    std::string key = "<key>";

    // initiate the multipart upload
    Aws::S3::Model::CreateMultipartUploadRequest create_request;
    create_request.SetBucket(bucket.c_str());
    create_request.SetKey(key.c_str());
    create_request.SetContentType("text/plain");
    auto createMultipartUploadOutcome = s3_client.CreateMultipartUpload(create_request);
    std::string upload_id = createMultipartUploadOutcome.GetResult().GetUploadId();
    std::cout << "multipart upload id is: " << upload_id << std::endl;

    // upload part 1
    Aws::S3::Model::UploadPartRequest my_request;
    my_request.SetBucket(bucket.c_str());
    my_request.SetKey(key.c_str());
    my_request.SetPartNumber(1);
    my_request.SetUploadId(upload_id.c_str());

    // just a small chunk of data to verify everything works
    Aws::StringStream ss;
    ss << "to upload";
    std::shared_ptr<Aws::StringStream> stream_ptr =
        Aws::MakeShared<Aws::StringStream>("WriteStream::Upload" /* log id */, ss.str());
    my_request.SetBody(stream_ptr);

    Aws::Utils::ByteBuffer part_md5(Aws::Utils::HashingUtils::CalculateMD5(*stream_ptr));
    my_request.SetContentMD5(Aws::Utils::HashingUtils::Base64Encode(part_md5));

    auto start_pos = stream_ptr->tellg();
    stream_ptr->seekg(0LL, stream_ptr->end);
    my_request.SetContentLength(static_cast<long>(stream_ptr->tellg()));
    stream_ptr->seekg(start_pos);

    auto uploadPartOutcomeCallable1 = s3_client.UploadPartCallable(my_request);

    // finish the upload
    Aws::S3::Model::CompleteMultipartUploadRequest completeMultipartUploadRequest;
    completeMultipartUploadRequest.SetBucket(bucket.c_str());
    completeMultipartUploadRequest.SetKey(key.c_str());
    completeMultipartUploadRequest.SetUploadId(upload_id.c_str());

    UploadPartOutcome uploadPartOutcome1 = uploadPartOutcomeCallable1.get();
    CompletedPart completedPart1;
    completedPart1.SetPartNumber(1);
    auto etag = uploadPartOutcome1.GetResult().GetETag();
    assert(!etag.empty()); // the etag must not be empty
    completedPart1.SetETag(etag);

    CompletedMultipartUpload completedMultipartUpload;
    completedMultipartUpload.AddParts(completedPart1);
    completeMultipartUploadRequest.WithMultipartUpload(completedMultipartUpload);

    auto completeMultipartUploadOutcome = s3_client.CompleteMultipartUpload(completeMultipartUploadRequest);
    if (!completeMultipartUploadOutcome.IsSuccess()) {
        auto error = completeMultipartUploadOutcome.GetError();
        std::cerr << error.GetExceptionName() << ": " << error.GetMessage() << std::endl;
        return -1;
    }

    Aws::ShutdownAPI(options);
    return 0;
}
I recommend using the Transfer Manager in general.
But if you don't want to for whatever reason, you can look at the source of the Transfer Manager to see how to do a multipart upload using the S3 APIs directly.
I want to write a C++ program that gets the applications suitable for opening a specified file. I found the LSCopyApplicationURLsForURL API and created a command-line C++ application with Xcode.
But when running this program, I always get a segmentation fault. Xcode shows an EXC_BAD_ACCESS (code=1, address...) error.
I also tried running it with sudo, but got the same result. What is the problem?
The code:
#include <iostream>
#include <objc/objc.h>
#include <objc/objc-runtime.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>

using namespace std;

int main(int argc, const char* argv[]) {
    auto url = CFURLRef("file:///Users/efan/src/a.cpp");
    auto ret = LSCopyApplicationURLsForURL(url, kLSRolesAll);
    cout << ret << endl;
    return 0;
}
Try creating your CFURLRef using one of the proper CFURLCreate* functions. See "Creating a CFURL" here.
For example:
auto tempStringURL = CFStringCreateWithCString(nullptr, "/Users/efan/src/a.cpp", kCFStringEncodingUTF8);
auto url = CFURLCreateWithFileSystemPath(nullptr, tempStringURL, kCFURLPOSIXPathStyle, FALSE);
auto ret = LSCopyApplicationURLsForURL(url, kLSRolesAll);
You need to CFRelease the "Created" variables to clean up memory.
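Concretely, following Core Foundation's Create/Copy rule, that means releasing everything obtained from a Create or Copy call once you are done with it; a sketch continuing the example above:

// both Create and Copy results are owned by the caller
if (ret) CFRelease(ret);   // LSCopyApplicationURLsForURL may return NULL
CFRelease(url);
CFRelease(tempStringURL);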
I am trying to read a .gpkg file to extract geo information like streets and buildings.
Therefore I started with this code:
#include "gdal_priv.h"
#include <iostream>
int main() {
GDALDataset* poDataset;
GDALAllRegister();
std::cout << "driver# " << GetGDALDriverManager()->GetDriverCount()
<< std::endl;
for (int i = 0; i < GetGDALDriverManager()->GetDriverCount(); i++) {
auto driver = GetGDALDriverManager()->GetDriver(i);
auto info = driver->GetDescription();
std::cout << "driver " << i << ": " << info << std::endl;
}
auto driver = GetGDALDriverManager()->GetDriverByName("GPKG");
poDataset = (GDALDataset*)GDALOpen("Building_LoD1.gpkg", GA_ReadOnly);
if (poDataset == NULL) {
// ...;
}
return 0;
}
The driver list contains GPKG, but opening fails with an error saying the file is not recognized as a supported file format.
Running gdalinfo Building_LoD1.gpkg leads to the same error in the console, but I can open the file in QGIS.
And gdalsrsinfo Building_LoD1.gpkg reports:
PROJ.4 : +proj=somerc +lat_0=46.95240555555556 +lon_0=7.439583333333333 +k_0=1 +x_0=2600000 +y_0=1200000 +ellps=bessel +towgs84=674.374,15.056,405.346,0,0,0,0 +units=m +no_defs
OGC WKT:
PROJCS["CH1903+ / LV95",
    GEOGCS["CH1903+",
        DATUM["CH1903+",
            SPHEROID["Bessel 1841",6377397.155,299.1528128,
                AUTHORITY["EPSG","7004"]],
            TOWGS84[674.374,15.056,405.346,0,0,0,0],
            AUTHORITY["EPSG","6150"]],
        PRIMEM["Greenwich",0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.0174532925199433,
            AUTHORITY["EPSG","9122"]],
        AUTHORITY["EPSG","4150"]],
    PROJECTION["Hotine_Oblique_Mercator_Azimuth_Center"],
    PARAMETER["latitude_of_center",46.95240555555556],
    PARAMETER["longitude_of_center",7.439583333333333],
    PARAMETER["azimuth",90],
    PARAMETER["rectified_grid_angle",90],
    PARAMETER["scale_factor",1],
    PARAMETER["false_easting",2600000],
    PARAMETER["false_northing",1200000],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]],
    AXIS["Easting",EAST],
    AXIS["Northing",NORTH],
    AUTHORITY["EPSG","2056"]]
Does anyone know why a .gpkg file might be reported as not supported?
The GDAL version is 2.3.2.
I figured out the problem. The reason for the message is not that the file format is unsupported by GDAL, but that I used the wrong function to open the file.
To read a file that has vector information, I need to use GDALOpenEx:
GDALDataset* poDS =
    (GDALDataset*)GDALOpenEx("Building_LoD1.gpkg", GDAL_OF_VECTOR, NULL, NULL, NULL);
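With the dataset opened that way, the vector layers are accessible through the OGR API on the same GDALDataset. A minimal sketch of listing the layers (assuming the file contains at least one vector layer):

#include "gdal_priv.h"
#include "ogrsf_frmts.h"
#include <iostream>

int main() {
    GDALAllRegister();
    GDALDataset* poDS = (GDALDataset*)GDALOpenEx(
        "Building_LoD1.gpkg", GDAL_OF_VECTOR, NULL, NULL, NULL);
    if (poDS == NULL)
        return 1;
    // each GeoPackage table shows up as one OGR layer
    for (int i = 0; i < poDS->GetLayerCount(); i++) {
        OGRLayer* layer = poDS->GetLayer(i);
        std::cout << layer->GetName() << ": "
                  << layer->GetFeatureCount() << " features" << std::endl;
    }
    GDALClose(poDS);
    return 0;
}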