AWS C++ S3 SDK PutObjectRequest unable to connect to endpoint

In working with the AWS C++ SDK I ran into an issue where trying to execute a PutObjectRequest complains that it is "unable to connect to endpoint" when uploading more than ~400KB.
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);
As long as my data is less than ~400KB it gets uploaded into a file on S3, but beyond that the request fails with "unable to connect to endpoint". I should be able to upload up to 5GB in a single PutObjectRequest.
Any thoughts?
Edit:
Responding to @JonathanHenson's comment, the AWS log shows this timeout error repeatedly:
[DEBUG] 2016-08-04 13:42:03 AWSClient [0x700000081000] Request Successfully signed
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Making request to https://s3.amazonaws.com/mybucket/myfile
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Including headers:
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-length: 3151261
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-type: binary/octet-stream
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] host: s3.amazonaws.com
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] user-agent: aws-sdk-cpp/0.13.9 Darwin/15.6.0 x86_64
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Attempting to acquire curl connection.
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Returning connection handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Obtained connection handle 0x10b09cc00
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] HTTP/1.1 100 Continue
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000]
[ERROR] 2016-08-04 13:42:06 CurlHttpClient [0x700000081000] Curl returned error code 28
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Releasing curl handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Notifying waiting threads.
[DEBUG] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request returned error. Attempting to generate appropriate error codes from response
[WARN] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request failed, now waiting 12800 ms before attempting again.
[DEBUG] 2016-08-04 13:42:19 InstanceProfileCredentialsProvider [0x700000081000] Checking if latest credential pull has expired.

Ultimately what fixed this for me was setting the request timeout (curl error code 28 in the log above is CURLE_OPERATION_TIMEDOUT). The request timeout needs to be long enough for your entire transfer to finish. If you are transferring large files on a slow internet connection, make sure the request timeout is long enough to allow those files to transfer; for example, at 1 MB/s a 500 MB upload needs over 500 seconds, so the 600000 ms (10 minute) value below is not excessive.
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
clientConfig.requestTimeoutMs = 600000;
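If you don't want to hard-code a value, a rough rule of thumb is to derive the timeout from the object size and the slowest sustained upload rate you expect. A minimal sketch of that calculation (the 100 KB/s floor and the 2x margin are assumptions; tune them for your environment):
#include <algorithm> // for std::max
// Worst-case sustained upload rate we are willing to wait for (an assumption)
const long minBytesPerSec = 100 * 1024; // 100 KB/s
// Allow the whole transfer plus a 2x safety margin, with a 30-second floor
long timeoutMs = static_cast<long>(static_cast<double>(dataSize) / minBytesPerSec * 1000.0 * 2.0);
clientConfig.requestTimeoutMs = std::max(timeoutMs, 30000L);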

Tweak your client configuration as below and see if it works.
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);
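Whichever configuration you go with, it's worth checking the outcome so the underlying error surfaces in your own output rather than only in the SDK trace log. A minimal sketch, assuming the s3Client and putObjectRequest set up above:
#include <iostream> // for std::cerr
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);
if (!outcome.IsSuccess()) {
    // Surfaces failures such as the curl timeout (error code 28) seen in the log
    std::cerr << "PutObject failed: " << outcome.GetError().GetExceptionName()
              << ": " << outcome.GetError().GetMessage() << std::endl;
}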

Related

gsutil timeout in every call refreshing access_token

This might be a duplicate but none of the previous answers match my conditions.
I installed gsutil as part of the google-cloud-sdk following https://cloud.google.com/sdk/docs/install. I could configure gcloud properly without errors.
Every time I try to use gsutil, for example with gsutil -D ls, I get
INFO 0518 14:52:16.412453 base_api.py] Body: (none)
INFO 0518 14:52:16.412517 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 14:52:16.412719 multiprocess_file_storage.py] Read credential file
DEBUG 0518 14:52:16.412842 multiprocess_file_storage.py] Read credential file
INFO 0518 14:52:16.412883 reauth_creds.py] Refreshing access_token
INFO 0518 14:53:16.546304 retry_util.py] Retrying request, attempt #1...
DEBUG 0518 14:53:16.546867 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 14:53:16.547127 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=blablabla after exception timed out
and more and more of those retries.
I see some users here who experienced the same; for instance, this points to a WAN Blocking setting being enabled, which is not my case. Here the OP says that it was a human error regarding proxy settings, but I don't have any:
➜ set | grep -i proxy
The same proxy thing seems to have solved it for another OP
In the same question another user says that it might be due to a conflicting ~/.boto config file, so I deleted it and tried, with the same results.
I tried reinstalling google SDK several times with the same result.
I tried configuring gsutil as a standalone application setting gcloud config set pass_credentials_to_gsutil false and running gsutil config. Again, without luck
This user seems to be experiencing the same problem as me, but he ends up saying that his solution was to restart the shell (exec -l $SHELL), quit and reopen the command line, and keep trying until it works...
So my question is, does anyone know a reliable non-proxy related way to solve this retrying issue in gsutil?
EDIT 1:
The output of curl -v https://storage.googleapis.com/storage/v1/b is
* Trying 2800:3f0:4002:800::2010:443...
* TCP_NODELAY set
* Trying 172.217.30.240:443...
* TCP_NODELAY set
* Connected to storage.googleapis.com (172.217.30.240) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.storage.googleapis.com
* start date: Apr 13 10:15:35 2021 GMT
* expire date: Jul 6 10:15:34 2021 GMT
* subjectAltName: host "storage.googleapis.com" matched cert's "*.googleapis.com"
* issuer: C=US; O=Google Trust Services; CN=GTS CA 1O1
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x564e279b8e10)
> GET /storage/v1/b HTTP/2
> Host: storage.googleapis.com
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 400
< x-guploader-uploadid: ABg5-Ux5VPhppIWB7G_da1ydkOJWv1BqXepMdpyJDPZ3zbTSRwPoqE44IqaPQMzLvWbSOab0bePewJXiwBXPpus9JDs
< content-type: application/json; charset=UTF-8
< date: Tue, 18 May 2021 19:41:52 GMT
< vary: Origin
< vary: X-Origin
< cache-control: no-cache, no-store, max-age=0, must-revalidate
< expires: Mon, 01 Jan 1990 00:00:00 GMT
< pragma: no-cache
< content-length: 297
< server: UploadServer
< alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
<
{
  "error": {
    "code": 400,
    "message": "Required parameter: project",
    "errors": [
      {
        "message": "Required parameter: project",
        "domain": "global",
        "reason": "required",
        "locationType": "parameter",
        "location": "project"
      }
    ]
  }
}
* Connection #0 to host storage.googleapis.com left intact
EDIT 2:
The complete output of gsutil -D ls is:
➜ gsutil -D ls
***************************** WARNING *****************************
*** You are running gsutil with debug output enabled.
*** Be aware that debug output includes authentication credentials.
*** Make sure to remove the value of the Authorization header for
*** each HTTP request printed to the console prior to posting to
*** a public medium such as a forum post or Stack Overflow.
***************************** WARNING *****************************
gsutil version: 4.62
checksum: fe14a00285d4702ed626050d0f9ae955 (OK)
boto version: 2.49.0
python version: 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
OS: Linux 5.8.0-50-generic
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /home/username/.config/gcloud/legacy_credentials/username@mail.com/.boto
gsutil path: /home/username/google-cloud-sdk/bin/gsutil
compiled crcmod: False
installed via package manager: False
editable install: False
Command being run: /home/username/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=default_project -o Credentials:use_client_certificate=False -D ls
config_file_list: ['/home/username/.config/gcloud/legacy_credentials/username@mail.com/.boto']
config: [('working_dir', '/mnt/pyami'), ('debug', '0'), ('https_validate_certificates', 'true'), ('working_dir', '/mnt/pyami'), ('debug', '0'), ('default_project_id', 'default_project')]
DEBUG 0518 17:46:39.910250 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:46:39.910444 multiprocess_file_storage.py] Read credential file
INFO 0518 17:46:39.910933 base_api.py] Calling method storage.buckets.list with StorageBucketsListRequest: <StorageBucketsListRequest
maxResults: 1000
project: 'default_project'
projection: ProjectionValueValuesEnum(noAcl, 1)>
INFO 0518 17:46:39.911321 base_api.py] Making http GET to https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl
INFO 0518 17:46:39.911495 base_api.py] Headers: {'accept': 'application/json',
'accept-encoding': 'gzip, deflate',
'content-length': '0',
'user-agent': 'apitools Python/3.8.5 gsutil/4.62 (linux) analytics/disabled '
'interactive/True command/ls google-cloud-sdk/341.0.0'}
INFO 0518 17:46:39.911611 base_api.py] Body: (none)
INFO 0518 17:46:39.911687 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:46:39.911900 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:46:39.912035 multiprocess_file_storage.py] Read credential file
INFO 0518 17:46:39.912081 reauth_creds.py] Refreshing access_token
INFO 0518 17:47:40.014159 retry_util.py] Retrying request, attempt #1...
DEBUG 0518 17:47:40.014368 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 17:47:40.014440 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl after exception timed out
INFO 0518 17:47:41.531516 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:47:41.532971 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:47:41.533422 multiprocess_file_storage.py] Read credential file
INFO 0518 17:47:41.533568 reauth_creds.py] Refreshing access_token
INFO 0518 17:48:41.590354 retry_util.py] Retrying request, attempt #2...
DEBUG 0518 17:48:41.590671 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 17:48:41.590815 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl after exception timed out
INFO 0518 17:48:46.107598 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:48:46.108518 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:48:46.108928 multiprocess_file_storage.py] Read credential file
INFO 0518 17:48:46.109037 reauth_creds.py] Refreshing access_token
^CDEBUG: Exception stack trace:
NoneType: None
DEBUG: Caught CTRL-C (signal 2) - Exception stack trace:
File "/home/username/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/home/username/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
sys.exit(gslib.__main__.main())
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 435, in main
return _RunNamedCommandAndHandleExceptions(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 631, in _RunNamedCommandAndHandleExceptions
return command_runner.RunNamedCommand(command_name,
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 410, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 568, in RunCommand
for blr in self.WildcardIterator(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 484, in IterBuckets
for blr in self._ExpandBucketWildcards(bucket_fields=bucket_fields):
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 400, in _ExpandBucketWildcards
for bucket in self.gsutil_api.ListBuckets(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/gcs_json_api.py", line 703, in ListBuckets
bucket_list = self.api_client.buckets.List(apitools_request,
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/third_party/storage_apitools/storage_v1_client.py", line 362, in List
return self._RunMethod(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 734, in _RunMethod
http_response = http_wrapper.MakeRequest(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/http_wrapper.py", line 348, in MakeRequest
return _MakeRequestNoRetry(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/http_wrapper.py", line 397, in _MakeRequestNoRetry
info, content = http.request(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/transport.py", line 159, in new_request
credentials._refresh(orig_request_method)
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/client.py", line 761, in _refresh
self._do_refresh_request(http)
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth_creds.py", line 112, in _do_refresh_request
self._update(*reauth.refresh_access_token(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth.py", line 267, in refresh_access_token
response, content = _reauth_client.refresh_grant(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/_reauth_client.py", line 147, in refresh_grant
response, content = http_request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth_creds.py", line 105, in http_request
response, content = transport.request(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/transport.py", line 280, in request
return http_callable(uri, method=method, body=body, headers=headers,
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1985, in request
(response, content) = self._request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1650, in _request
(response, content) = self._conn_request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1557, in _conn_request
conn.connect()
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1324, in connect
sock.connect((self.host, self.port))
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/sig_handling.py", line 92, in _SignalHandler
_final_signal_handlers[signal_num](signal_num, cur_stack_frame)
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 519, in _HandleControlC
stack_trace = ''.join(traceback.format_list(traceback.extract_stack()))
I have the same exact error, and I found out that it's due to my machine resolving hosts to IPv6, which causes the timeouts. On Ubuntu I just had to disable IPv6 and restart; after that, YAY! It works and now I can use gsutil.
If anyone else encounters the same issue, try doing this. :)
After giving up on this I decided to reinstall the whole google-cloud-sdk suite one last time, but this time using the snap version. Installing it via snap solved the issue for me. I think this points to some issue with my environment that was bypassed thanks to the snap containerization.
So no clear answer here, but if anyone is experiencing the same problem, giving snap a chance may solve the issue as it did for me.

XRAY shutdown in sidecar container in ECS fargate

I'm trying to make XRAY work as a sidecar container on ECS fargate.
However, it is shutting down and stopping the task.
These are the logs (newest first):
2021-04-26T15:59:59Z [Debug] Shutdown Initiated. Current epoch in nanoseconds: 1619452799073953600
2021-04-26T15:59:59Z [Info] Got shutdown signal: terminated
2021-04-26T15:59:59Z [Debug] Skipped telemetry data as no segments found
2021-04-26T15:59:59Z [Debug] telemetry: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] Segment batch: done!
2021-04-26T15:59:59Z [Debug] processor: done!
2021-04-26T15:59:59Z [Debug] Trace segment: received: 0, truncated: 0, processed: 0
2021-04-26T15:59:59Z [Debug] Shutdown finished. Current epoch in nanoseconds: 1619452799074286437
2021-04-26T15:59:58Z [Info] Starting proxy http server on 0.0.0.0:2000
2021-04-26T15:59:58Z [Error] Get instance id metadata failed: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp 169.254.169.254:80: connect: invalid argument
2021-04-26T15:59:58Z [Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
2021-04-26T15:59:58Z [Debug] Telemetry initiated
2021-04-26T15:59:58Z [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.us-east-1.amazonaws.com
2021-04-26T15:59:58Z [Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
2021-04-26T15:59:58Z [Debug] Batch size: 50
2021-04-26T15:59:57Z [Debug] Get hostname metadata failed: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/hostname: dial tcp 169.254.169.254:80: connect: invalid argument
2021-04-26T15:59:57Z [Debug] Using proxy address:
2021-04-26T15:59:57Z [Debug] Fetch region us-east-1 from environment variables
2021-04-26T15:59:57Z [Info] Using region: us-east-1
2021-04-26T15:59:57Z [Debug] ARN of the AWS resource running the daemon:
2021-04-26T15:59:57Z [Info] Initializing AWS X-Ray daemon 3.2.0
2021-04-26T15:59:57Z [Debug] Listening on UDP 0.0.0.0:2000
2021-04-26T15:59:57Z [Info] Using buffer memory limit of 37 MB
2021-04-26T15:59:57Z [Info] 592 segment buffers allocated
I found this and checked that I have everything needed: https://github.com/aws/aws-app-mesh-examples/blob/db4a8d49ab61c62dbc254cd4d35a3911df4cc32c/walkthroughs/howto-alb/app.yaml#L61, particularly the task role (and permissions).
I don't understand what's going on, and googling doesn't provide good hints either.
I would appreciate any help.
Thanks

Unable to create new s3 bucket in terraform

I'm attempting to create a new s3 bucket and getting a conflict, though I know the bucket name is new and unique, and it has been many hours (8+) since that name was last in use. Details attached. I've even tried with a new name that I know was never a bucket in my account (and likely never a bucket at all).
The name in the logs below is made up and not the one I was using, which was unique and namespaced to my domain.
If I use the aws s3 cli to make the bucket (i.e. aws s3 mb s3://{same-bucket-name} --region us-east-2) where {same-bucket-name} is the name of the bucket I want to create, it works fine.
2019-07-07T00:12:19.463-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [DEBUG] Trying to create new S3 bucket: "my-unique-s3-bucket-name"
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [DEBUG] [aws-sdk-go] DEBUG: Request s3/CreateBucket Details:
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: ---[ REQUEST POST-SIGN ]-----------------------------
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: PUT /my-unique-s3-bucket-name HTTP/1.1
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Host: s3.us-east-2.amazonaws.com
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: User-Agent: aws-sdk-go/1.20.12 (go1.12.5; darwin; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.12.2
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Content-Length: 153
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Authorization: AWS4-HMAC-SHA256 Credential=MYCREDS/20190707/us-east-2/s3/aws4_request, SignedHeaders=content-length;host;x-amz-acl;x-amz-content-sha256;x-amz-date, Signature=b5acd2dbcaf09eda51b4ea8448f1991d26c8eb8249a85e7ac28044864df377b9
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: X-Amz-Acl: public-read
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: X-Amz-Content-Sha256: 70cae86320841ea73b0bdc759f99920c7caa405e61af2742575750c6586272c9
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: X-Amz-Date: 20190707T041219Z
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Accept-Encoding: gzip
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4:
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: <CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>us-east-2</LocationConstraint></CreateBucketConfiguration>
2019-07-07T00:12:19.464-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: -----------------------------------------------------
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [DEBUG] [aws-sdk-go] DEBUG: Response s3/CreateBucket Details:
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: ---[ RESPONSE ]--------------------------------------
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: HTTP/1.1 409 Conflict
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Connection: close
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Transfer-Encoding: chunked
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Content-Type: application/xml
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Date: Sun, 07 Jul 2019 04:12:19 GMT
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: Server: AmazonS3
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: X-Amz-Id-2: v5M1x31BcVCS4DLIgqmCR4KRHipO3ZRbTSXF1PCS9+q9nyT8O5/3s04Z22o8t4x8JZ0HF9HWkO4=
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: X-Amz-Request-Id: 835B636D828335A1
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4:
2019-07-07T00:12:19.697-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4:
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: -----------------------------------------------------
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: <Error><Code>OperationAborted</Code><Message>A conflicting conditional operation is currently in progress against this resource. Please try again.</Message><RequestId>835B636D828335A1</RequestId><HostId>v5M1x31BcVCS4DLIgqmCR4KRHipO3ZRbTSXF1PCS9+q9nyT8O5/3s04Z22o8t4x8JZ0HF9HWkO4=</HostId></Error>
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/CreateBucket failed, attempt 0/25, error OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: status code: 409, request id: 835B636D828335A1, host id: v5M1x31BcVCS4DLIgqmCR4KRHipO3ZRbTSXF1PCS9+q9nyT8O5/3s04Z22o8t4x8JZ0HF9HWkO4=
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [WARN] Got an error while trying to create S3 bucket my-unique-s3-bucket-name: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: status code: 409, request id: 835B636D828335A1, host id: v5M1x31BcVCS4DLIgqmCR4KRHipO3ZRbTSXF1PCS9+q9nyT8O5/3s04Z22o8t4x8JZ0HF9HWkO4=
2019-07-07T00:12:19.698-0400 [DEBUG] plugin.terraform-provider-aws_v2.18.0_x4: 2019/07/07 00:12:19 [TRACE] Waiting 10s before next try
If the bucket did previously exist, then there is an indeterminate amount of time before that bucket name is released again.
Unfortunately, the AWS docs aren't very specific here:
Important
If you want to continue to use the same bucket name, don't delete the bucket. We recommend that you empty the bucket and keep it. After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, it might take some time before the name can be reused, and some other account could create a bucket with that name before you do.
You can talk to AWS support to confirm what's happening (and check that another AWS account doesn't have the bucket), but ultimately you just need to wait. If the S3 bucket matches a domain name that you control, you intend to use it for website hosting, and someone else already has that S3 bucket, then there is a process for getting that bucket name back, just as there is with CloudFront CNAMEs, which are also globally unique.
You should also be able to check if the bucket name is available by running the following command:
aws s3api head-bucket --bucket [bucket name]
Ages back, when we briefly tried deleting S3 buckets in test environments overnight (along with everything else), we would occasionally see this error persist for over 48 hours, while at other times the bucket name was available again within a few hours. Unfortunately, AWS provides no guarantees here.
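If you are working from the C++ SDK, as in the first question above, a roughly equivalent availability check is sketched below (assuming a configured s3Client; the bucket name is a placeholder):
#include <iostream> // for std::cout
Aws::S3::Model::HeadBucketRequest headRequest;
headRequest.SetBucket("my-unique-s3-bucket-name");
auto headOutcome = s3Client.HeadBucket(headRequest);
if (headOutcome.IsSuccess()) {
    // 200: the bucket exists and you can reach it, so the name is taken
} else {
    // Typically 404 means the name is free; 403 means another account owns it
    std::cout << "HEAD returned HTTP "
              << static_cast<int>(headOutcome.GetError().GetResponseCode()) << std::endl;
}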

Dataflow setting Controller Service Account

I am trying to set up a controller service account for Dataflow. In my Dataflow options I have:
options.setGcpCredential(GoogleCredentials.fromStream(
        new FileInputStream("key.json")).createScoped(someArrays));
options.setServiceAccount("xxx@yyy.iam.gserviceaccount.com");
But I'm getting:
WARNING: Request failed with code 403, performed 0 retries due to IOExceptions,
performed 0 retries due to unsuccessful status codes, HTTP framework says
request can be retried, (caller responsible for retrying):
https://dataflow.googleapis.com/v1b3/projects/MYPROJECT/locations/MYLOCATION/jobs
Exception in thread "main" java.lang.RuntimeException: Failed to create a workflow
job: (CODE): Current user cannot act as
service account "xxx@yyy.iam.gserviceaccount.com".
Causes: (CODE): Current user cannot act as
service account "xxx@yyy.iam.gserviceaccount.com".
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:791)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:311)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
...
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "(CODE): Current user cannot act as service account xxx@yyy.iam.gserviceaccount.com. Causes: (CODE): Current user cannot act as service account xxx@yyy.iam.gserviceaccount.com.",
    "reason" : "forbidden"
  } ],
  "message" : "(CODE): Current user cannot act as service account xxx@yyy.iam.gserviceaccount.com. Causes: (CODE): Current user cannot act as service account xxx@yyy.iam.gserviceaccount.com.",
  "status" : "PERMISSION_DENIED"
}
Am I missing some Roles or permissions?
Maybe someone is going to find this helpful:
For the controller it was: Dataflow Worker and Storage Object Admin (as found in Google's documentation).
For the executor it was: Service Account User.
I've been hitting this error and thought it worth sharing my experiences (partly because I suspect I'll encounter this again in the future).
The terraform code to create my dataflow job is:
resource "google_dataflow_job" "wordcount" {
# https://stackoverflow.com/a/59931467/201657
name = "wordcount"
template_gcs_path = "gs://dataflow-templates/latest/Word_Count"
temp_gcs_location = "gs://${local.name-prefix}-functions/temp"
parameters = {
inputFile = "gs://dataflow-samples/shakespeare/kinglear.txt"
output = "gs://${local.name-prefix}-functions/wordcount/output"
}
service_account_email = "serviceAccount:${data.google_service_account.sa.email}"
}
The error message:
Error: googleapi: Error 400: (c3c0d991927a8658): Current user cannot act as service account serviceAccount:dataflowdemo@redacted.iam.gserviceaccount.com., badRequest
was returned from running terraform apply. Checking out the logs provided a lot more info:
gcloud logging read 'timestamp >= "2020-12-31T13:39:58.733249492Z" AND timestamp <= "2020-12-31T13:45:58.733249492Z"' --format="csv(timestamp,severity,textPayload)" --order=asc
which returned various log records, including this:
Permissions verification for controller service account failed. IAM role roles/dataflow.worker should be granted to controller service account dataflowdemo@redacted.iam.gserviceaccount.com.
so I granted that missing role:
gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:dataflowdemo@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/dataflow.worker"
and ran terraform apply again. This time I got the same error in the terraform output but there were no errors to be seen in the logs.
I then followed the advice given at https://cloud.google.com/dataflow/docs/concepts/access-control#creating_jobs to also grant the roles/dataflow.admin:
gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:dataflowdemo@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/dataflow.admin"
but there was no discernible difference from the previous attempt.
I then tried turning on terraform debug logging which provided this info:
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: ---[ REQUEST ]---------------------------------------
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: POST /v1b3/projects/redacted/locations/europe-west1/templates?alt=json&prettyPrint=false HTTP/1.1
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Host: dataflow.googleapis.com
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: User-Agent: google-api-go-client/0.5 Terraform/0.14.2 (+https://www.terraform.io) Terraform-Plugin-SDK/2.1.0 terraform-provider-google/dev
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Content-Length: 385
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Content-Type: application/json
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: X-Goog-Api-Client: gl-go/1.14.5 gdcl/20201023
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Accept-Encoding: gzip
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: {
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "environment": {
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "serviceAccountEmail": "serviceAccount:dataflowdemo#redacted.iam.gserviceaccount.com",
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "tempLocation": "gs://jamiet-demo-functions/temp"
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: },
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "gcsPath": "gs://dataflow-templates/latest/Word_Count",
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "jobName": "wordcount",
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "parameters": {
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "output": "gs://jamiet-demo-functions/wordcount/output"
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: }
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: }
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:13.129Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: -----------------------------------------------------
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: 2020/12/31 16:04:14 [DEBUG] Google API Response Details:
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: ---[ RESPONSE ]--------------------------------------
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: HTTP/1.1 400 Bad Request
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Connection: close
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Transfer-Encoding: chunked
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Alt-Svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Cache-Control: private
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Content-Type: application/json; charset=UTF-8
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Date: Thu, 31 Dec 2020 16:04:15 GMT
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Server: ESF
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Vary: Origin
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Vary: X-Origin
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: Vary: Referer
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: X-Content-Type-Options: nosniff
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: X-Frame-Options: SAMEORIGIN
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: X-Xss-Protection: 0
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: 1f9
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: {
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "error": {
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "code": 400,
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "message": "(dbacb1c39beb28c9): Current user cannot act as service account serviceAccount:dataflowdemo#redacted.iam.gserviceaccount.com.",
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "errors": [
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: {
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "message": "(dbacb1c39beb28c9): Current user cannot act as service account serviceAccount:dataflowdemo#redacted.iam.gserviceaccount.com.",
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "domain": "global",
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "reason": "badRequest"
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: }
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: ],
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: "status": "INVALID_ARGUMENT"
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: }
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: }
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: 0
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5:
2020-12-31T16:04:14.647Z [DEBUG] plugin.terraform-provider-google_v3.51.0_x5: -----------------------------------------------------
The error being returned from dataflow.googleapis.com is clearly evident:
Current user cannot act as service account serviceAccount:dataflowdemo@redacted.iam.gserviceaccount.com
At this stage I am puzzled as to why I can see an error being returned from the Google's dataflow API but there is nothing in the GCP logs indicating that an error occurred.
Then I had a bit of a lightbulb moment: why does that error message mention "service account serviceAccount"? It hit me: I'd defined the service account incorrectly. The Terraform code should have been:
resource "google_dataflow_job" "wordcount" {
# https://stackoverflow.com/a/59931467/201657
name = "wordcount"
template_gcs_path = "gs://dataflow-templates/latest/Word_Count"
temp_gcs_location = "gs://${local.name-prefix}-functions/temp"
parameters = {
inputFile = "gs://dataflow-samples/shakespeare/kinglear.txt"
output = "gs://${local.name-prefix}-functions/wordcount/output"
}
service_account_email = data.google_service_account.sa.email
}
I corrected it and it worked straight away. User error!!!
I then set about removing the various permissions that I'd added:
gcloud projects remove-iam-policy-binding $PROJECT \
  --member="serviceAccount:dataflowdemo@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/dataflow.admin"
gcloud projects remove-iam-policy-binding $PROJECT \
  --member="serviceAccount:dataflowdemo@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/dataflow.worker"
and terraform apply still worked. However, after removing the grant of role roles/dataflow.worker the job failed with error:
Workflow failed. Causes: Permissions verification for controller service account failed. IAM role roles/dataflow.worker should be granted to controller service account dataflowdemo@redacted.iam.gserviceaccount.com.
so clearly the documentation regarding the appropriate roles to grant (https://cloud.google.com/dataflow/docs/concepts/access-control#creating_jobs) is spot on.
As may be apparent, I started writing this post before I knew what the problem was, and I thought it might be useful to document my investigation somewhere. Now that I've finished the investigation and the problem turns out to be one of PEBCAK, it's probably not so relevant to this thread anymore, and certainly shouldn't be accepted as an answer. Nevertheless, there is probably some useful information in here about how to investigate issues with terraform calling Google APIs, and it also reiterates the required role grants, so I'll leave it here in case it ever turns out to be useful.
I just hit this problem again, so I'm posting my solution up here as I fully expect I'll get bitten by this again at some point.
I was getting error:
Error: googleapi: Error 403: (a00eba23d59c1fa3): Current user cannot act as service account dataflow-controller-sa@myproject.iam.gserviceaccount.com. Causes: (a00eba23d59c15ac): Current user cannot act as service account dataflow-controller-sa@myproject.iam.gserviceaccount.com., forbidden
I was deploying the dataflow job, via terraform, using a different service account, deployer@myproject.iam.gserviceaccount.com
The solution was to grant that service account the roles/iam.serviceAccountUser role:
gcloud projects add-iam-policy-binding myproject \
  --member=serviceAccount:deployer@myproject.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser
For those who prefer custom IAM roles over predefined IAM roles, the specific permission that was missing was iam.serviceAccounts.actAs.
Issue got resolved!
Go to GCP -> Console -> IAM -> Service Account Email -> Add Permission -> Service Account User.

AWS c++ SDK Aws::CognitoIdentityProvider::CognitoIdentityProviderClient InitiateAuth

I am trying to call InitiateAuth on Aws::CognitoIdentityProvider::CognitoIdentityProviderClient.
Here is my code:
Aws::Client::ClientConfiguration clientConfiguration;
clientConfiguration.region = regionId;
clientConfiguration.proxyHost = "<>";
clientConfiguration.proxyPort = <>;
clientConfiguration.proxyScheme = Aws::Http::Scheme::HTTP;
Aws::CognitoIdentityProvider::Model::InitiateAuthRequest authRequest;
authRequest.SetAuthFlow(Aws::CognitoIdentityProvider::Model::AuthFlowType::USER_SRP_AUTH);
authRequest.SetClientId(clientId);
Aws::Map<Aws::String, Aws::String> authParameters;
authParameters.insert(std::make_pair("USERNAME", userName));
authParameters.insert(std::make_pair("SRP_A", srp_A));
authRequest.SetAuthParameters(authParameters);
std::shared_ptr<Aws::CognitoIdentityProvider::CognitoIdentityProviderClient> cognitoClient = std::make_shared<Aws::CognitoIdentityProvider::CognitoIdentityProviderClient>(clientConfiguration);
auto authRequestResult = cognitoClient->InitiateAuth(authRequest);
Here is what I get in the Trace logs:
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] content-length: 935
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] content-type: application/x-amz-json-1.1
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] host: cognito-idp.eu-west-1.amazonaws.com
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] user-agent: aws-sdk-cpp/1.6.30 Linux/3.10.0-862.9.1.el7.x86_64 x86_64 GCC/4.8.5
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] x-amz-target: AWSCognitoIdentityProviderService.InitiateAuth
[DEBUG] 2018-10-24 14:02:26 CURL [140737353963648] (HeaderIn) HTTP/1.1 400 Bad Request
I have managed to debug, and I think that behind the scenes the code is executing an equivalent of:
curl -i \
-H "transfer-encoding:" \
-H "content-length: 914" \
-H "content-type: application/x-amz-json-1.1" \
-H "host: cognito-idp.eu-west-1.amazonaws.com" \
-H "user-agent: aws-sdk-cpp/1.6.30 Linux/3.10.0-862.9.1.el7.x86_64 x86_64 GCC/4.8.5" \
-H "x-amz-target: AWSCognitoIdentityProviderService.InitiateAuth" \
-X POST \
-d '{"AuthFlow":"USER_SRP_AUTH","AuthParameters":{"SRP_A":"<>","USERNAME":"<>"},"ClientId":"<>"}' \
https://cognito-idp.eu-west-1.amazonaws.com --verbose --proxy http://proxy:port
I do not know why it does not work. Is it the proxy, or a wrong setup in the code?
Please help.
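One way to narrow down whether it's the proxy or the request itself is to inspect the outcome, since a 400 response body usually names the offending parameter, whereas a proxy problem would surface as a connection-level error instead. A minimal diagnostic sketch, assuming the cognitoClient and authRequest above:
#include <iostream> // for std::cerr
auto authRequestResult = cognitoClient->InitiateAuth(authRequest);
if (!authRequestResult.IsSuccess()) {
    const auto& err = authRequestResult.GetError();
    // e.g. InvalidParameterException with a message naming the bad parameter,
    // versus a network/curl error that would point at the proxy setup
    std::cerr << err.GetExceptionName() << ": " << err.GetMessage() << std::endl;
}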