gsutil timeout in every call refreshing access_token - google-cloud-platform

This might be a duplicate, but none of the previous answers match my conditions.
I installed gsutil as part of the google-cloud-sdk following https://cloud.google.com/sdk/docs/install, and I was able to configure gcloud without errors.
Every time I try to use gsutil, for example with gsutil -D ls, I get
INFO 0518 14:52:16.412453 base_api.py] Body: (none)
INFO 0518 14:52:16.412517 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 14:52:16.412719 multiprocess_file_storage.py] Read credential file
DEBUG 0518 14:52:16.412842 multiprocess_file_storage.py] Read credential file
INFO 0518 14:52:16.412883 reauth_creds.py] Refreshing access_token
INFO 0518 14:53:16.546304 retry_util.py] Retrying request, attempt #1...
DEBUG 0518 14:53:16.546867 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 14:53:16.547127 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=blablabla after exception timed out
and more and more of those retries.
Some users here have experienced the same thing. For instance, one answer points to a WAN Blocking setting being enabled, which is not my case. In another question the OP says it was human error in their proxy settings, but I don't have any:
➜ set | grep -i proxy
The same proxy issue seems to have been the solution for yet another OP.
In the same question, another user says it might be due to a conflicting ~/.boto config file, so I deleted it and tried again, with the same results.
I have tried reinstalling the Google Cloud SDK several times, with the same result.
I also tried configuring gsutil as a standalone application by setting gcloud config set pass_credentials_to_gsutil false and running gsutil config. Again, no luck.
Another user seems to be experiencing the very same problem, but he ends up saying that his solution was to restart the shell (exec -l $SHELL), quit and reopen the command line, and keep trying until it works...
So my question is: does anyone know a reliable, non-proxy-related way to solve this retrying issue in gsutil?
EDIT 1:
The output of curl -v https://storage.googleapis.com/storage/v1/b is
* Trying 2800:3f0:4002:800::2010:443...
* TCP_NODELAY set
* Trying 172.217.30.240:443...
* TCP_NODELAY set
* Connected to storage.googleapis.com (172.217.30.240) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.storage.googleapis.com
* start date: Apr 13 10:15:35 2021 GMT
* expire date: Jul 6 10:15:34 2021 GMT
* subjectAltName: host "storage.googleapis.com" matched cert's "*.googleapis.com"
* issuer: C=US; O=Google Trust Services; CN=GTS CA 1O1
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x564e279b8e10)
> GET /storage/v1/b HTTP/2
> Host: storage.googleapis.com
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 400
< x-guploader-uploadid: ABg5-Ux5VPhppIWB7G_da1ydkOJWv1BqXepMdpyJDPZ3zbTSRwPoqE44IqaPQMzLvWbSOab0bePewJXiwBXPpus9JDs
< content-type: application/json; charset=UTF-8
< date: Tue, 18 May 2021 19:41:52 GMT
< vary: Origin
< vary: X-Origin
< cache-control: no-cache, no-store, max-age=0, must-revalidate
< expires: Mon, 01 Jan 1990 00:00:00 GMT
< pragma: no-cache
< content-length: 297
< server: UploadServer
< alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
<
{
  "error": {
    "code": 400,
    "message": "Required parameter: project",
    "errors": [
      {
        "message": "Required parameter: project",
        "domain": "global",
        "reason": "required",
        "locationType": "parameter",
        "location": "project"
      }
    ]
  }
}
* Connection #0 to host storage.googleapis.com left intact
EDIT 2:
The complete output of gsutil -D ls is:
➜ gsutil -D ls
***************************** WARNING *****************************
*** You are running gsutil with debug output enabled.
*** Be aware that debug output includes authentication credentials.
*** Make sure to remove the value of the Authorization header for
*** each HTTP request printed to the console prior to posting to
*** a public medium such as a forum post or Stack Overflow.
***************************** WARNING *****************************
gsutil version: 4.62
checksum: fe14a00285d4702ed626050d0f9ae955 (OK)
boto version: 2.49.0
python version: 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
OS: Linux 5.8.0-50-generic
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /home/username/.config/gcloud/legacy_credentials/username@mail.com/.boto
gsutil path: /home/username/google-cloud-sdk/bin/gsutil
compiled crcmod: False
installed via package manager: False
editable install: False
Command being run: /home/username/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=default_project -o Credentials:use_client_certificate=False -D ls
config_file_list: ['/home/username/.config/gcloud/legacy_credentials/username@mail.com/.boto']
config: [('working_dir', '/mnt/pyami'), ('debug', '0'), ('https_validate_certificates', 'true'), ('working_dir', '/mnt/pyami'), ('debug', '0'), ('default_project_id', 'default_project')]
DEBUG 0518 17:46:39.910250 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:46:39.910444 multiprocess_file_storage.py] Read credential file
INFO 0518 17:46:39.910933 base_api.py] Calling method storage.buckets.list with StorageBucketsListRequest: <StorageBucketsListRequest
maxResults: 1000
project: 'default_project'
projection: ProjectionValueValuesEnum(noAcl, 1)>
INFO 0518 17:46:39.911321 base_api.py] Making http GET to https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl
INFO 0518 17:46:39.911495 base_api.py] Headers: {'accept': 'application/json',
'accept-encoding': 'gzip, deflate',
'content-length': '0',
'user-agent': 'apitools Python/3.8.5 gsutil/4.62 (linux) analytics/disabled '
'interactive/True command/ls google-cloud-sdk/341.0.0'}
INFO 0518 17:46:39.911611 base_api.py] Body: (none)
INFO 0518 17:46:39.911687 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:46:39.911900 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:46:39.912035 multiprocess_file_storage.py] Read credential file
INFO 0518 17:46:39.912081 reauth_creds.py] Refreshing access_token
INFO 0518 17:47:40.014159 retry_util.py] Retrying request, attempt #1...
DEBUG 0518 17:47:40.014368 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 17:47:40.014440 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl after exception timed out
INFO 0518 17:47:41.531516 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:47:41.532971 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:47:41.533422 multiprocess_file_storage.py] Read credential file
INFO 0518 17:47:41.533568 reauth_creds.py] Refreshing access_token
INFO 0518 17:48:41.590354 retry_util.py] Retrying request, attempt #2...
DEBUG 0518 17:48:41.590671 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0518 17:48:41.590815 http_wrapper.py] Retrying request to url https://storage.googleapis.com/storage/v1/b?alt=json&fields=nextPageToken%2Citems%2Fid&maxResults=1000&project=default_project&projection=noAcl after exception timed out
INFO 0518 17:48:46.107598 transport.py] Attempting refresh to obtain initial access_token
DEBUG 0518 17:48:46.108518 multiprocess_file_storage.py] Read credential file
DEBUG 0518 17:48:46.108928 multiprocess_file_storage.py] Read credential file
INFO 0518 17:48:46.109037 reauth_creds.py] Refreshing access_token
^CDEBUG: Exception stack trace:
NoneType: None
DEBUG: Caught CTRL-C (signal 2) - Exception stack trace:
File "/home/username/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/home/username/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
sys.exit(gslib.__main__.main())
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 435, in main
return _RunNamedCommandAndHandleExceptions(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 631, in _RunNamedCommandAndHandleExceptions
return command_runner.RunNamedCommand(command_name,
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 410, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 568, in RunCommand
for blr in self.WildcardIterator(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 484, in IterBuckets
for blr in self._ExpandBucketWildcards(bucket_fields=bucket_fields):
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 400, in _ExpandBucketWildcards
for bucket in self.gsutil_api.ListBuckets(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/gcs_json_api.py", line 703, in ListBuckets
bucket_list = self.api_client.buckets.List(apitools_request,
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/third_party/storage_apitools/storage_v1_client.py", line 362, in List
return self._RunMethod(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 734, in _RunMethod
http_response = http_wrapper.MakeRequest(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/http_wrapper.py", line 348, in MakeRequest
return _MakeRequestNoRetry(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/apitools/apitools/base/py/http_wrapper.py", line 397, in _MakeRequestNoRetry
info, content = http.request(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/transport.py", line 159, in new_request
credentials._refresh(orig_request_method)
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/client.py", line 761, in _refresh
self._do_refresh_request(http)
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth_creds.py", line 112, in _do_refresh_request
self._update(*reauth.refresh_access_token(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth.py", line 267, in refresh_access_token
response, content = _reauth_client.refresh_grant(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/_reauth_client.py", line 147, in refresh_grant
response, content = http_request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/google-reauth-python/google_reauth/reauth_creds.py", line 105, in http_request
response, content = transport.request(
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/vendored/oauth2client/oauth2client/transport.py", line 280, in request
return http_callable(uri, method=method, body=body, headers=headers,
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1985, in request
(response, content) = self._request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1650, in _request
(response, content) = self._conn_request(
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1557, in _conn_request
conn.connect()
File "/home/username/google-cloud-sdk/platform/gsutil/third_party/httplib2/python3/httplib2/__init__.py", line 1324, in connect
sock.connect((self.host, self.port))
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/sig_handling.py", line 92, in _SignalHandler
_final_signal_handlers[signal_num](signal_num, cur_stack_frame)
File "/home/username/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 519, in _HandleControlC
stack_trace = ''.join(traceback.format_list(traceback.extract_stack()))

I had the exact same error, and I found out that it was due to my machine resolving hosts to IPv6, which causes the timeouts. On Ubuntu I just had to disable IPv6 (e.g. sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1) and restart, and after that, YAY! It works, and now I can use gsutil.
If anyone else encounters the same issue, try doing this. :)
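Before disabling IPv6 system-wide, you can confirm the diagnosis with a minimal check (a sketch, Python standard library only; it assumes the token refresh is going to oauth2.googleapis.com and simply compares IPv4 vs IPv6 reachability):

import socket

# Hypothetical diagnostic, not part of gsutil: try to connect to every address
# the host resolves to. If the IPv6 attempts time out while IPv4 connects,
# the machine is resolving to unreachable IPv6 addresses, as described above.
HOST, PORT, TIMEOUT = "oauth2.googleapis.com", 443, 10

for family, socktype, proto, _, addr in socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    sock = socket.socket(family, socket.SOCK_STREAM)
    sock.settimeout(TIMEOUT)
    try:
        sock.connect(addr)
        print(label, addr[0], "connected")
    except OSError as exc:
        print(label, addr[0], "FAILED:", exc)
    finally:
        sock.close()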

After giving up on this, I decided to reinstall the whole google-cloud-sdk suite one last time, but this time using the snap version. Installing it via snap solved the issue for me. I think this points to some issue with my environment that was bypassed thanks to snap's containerization.
So there is no clear answer here, but if anyone is experiencing the same problem, giving snap a chance may solve the issue as it did for me.
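For reference, the usual way to install it via snap (assuming snapd is set up; the package uses classic confinement):
sudo snap install google-cloud-sdk --classic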

Related

Why is the endpoint returning the error below while processing a request?

Details: I have added the Datamapper in-process module to my WSO2 project, but when I send a request JSON from the command prompt to my back-end service, I get the error below from the endpoint.
In the console window of Integration Studio, the logs below show that the message passes through a log module just before the endpoint:
[2020-02-18 15:25:14,521] INFO {org.apache.synapse.mediators.builtin.LogMediator} - message = Routing to clemency medical center
[2020-02-18 15:46:22,301] INFO {org.apache.synapse.mediators.builtin.LogMediator} - message = Routing to clemency medical center
In the Command Prompt I get this error:
F:\WS02\WSO2 Integration Studio\Request_JSON\HelathCare\Transforming Message Content>curl -v -X POST --data #request.json http://localhost:8280/healthcare/categories/surgery/reserve --header "Content-Type:application/json"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8280 (#0)
> POST /healthcare/categories/surgery/reserve HTTP/1.1
> Host: localhost:8280
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Type:application/json
> Content-Length: 200
>
* upload completely sent off: 200 out of 200 bytes
< HTTP/1.1 500 Internal Server Error
< Accept-Ranges: none
< Access-Control-Allow-Methods: POST
< Set-Cookie: SERVERID=s0; path=/
< Access-Control-Allow-Headers: content-type
< Content-Type: application/octet-stream
< Via: HTTP/1.1 forward.http.proxy:8080
< Date: Tue, 18 Feb 2020 10:16:27 GMT
< Transfer-Encoding: chunked
<
Error in executing request: POST /clemency/categories/surgery/reserve* Connection #0 to host localhost left intact
Below are the request and response JSON contents I have used.
Request content (the client sends content in the format below):
{
  "name": "John Doe",
  "dob": "1940-03-19",
  "ssn": "234-23-525",
  "address": "California",
  "phone": "8770586755",
  "email": "johndoe@gmail.com",
  "doctor": "thomas collins",
  "hospital": "grand oak community hospital"
}
The response we expect from the back-end service via the data mapper:
{
  "patient": {
    "name": "John Doe",
    "dob": "1990-03-19",
    "ssn": "234-23-525",
    "address": "California",
    "phone": "8770586755",
    "email": "johndoe@gmail.com"
  },
  "doctor": "thomas collins",
  "hospital": "grand oak community hospital"
}
When using the data mapper approach, just be sure about the input and output schemas you are providing. After successfully mapping input and output for conversion from XML to JSON or vice versa, check the Data-Mapper's properties to make sure the input and output types are set as per your requirements.
By default it is set to XML to XML.

Why the APIMCLI import process give me 403 Forbidden?

I am getting "403" forbidden when I try to import an API for my distributed APIM instance.
I have 4 VMs running centos 7 and JDK 8:
1st - PostgreSQL
2nd - WSO2 IS with Key manager
3rd - WSO2 APIMMANAGER (2 instances - APIMStore and APIMPublisher)
4th - WSO2 APIMWORKER (2 instances - APIMGateway and APIMTrafficManager)
1- After starting all servers OK, I create an 'env' for APIMCLI as follows:
apimcli add-env -n apimm_hml --registration https://apimmanager:9444/client-registration/v0.14/register --apim https://apimmanager:9444 --token https://apimmanager:8244/token --import-export https://apimmanager:9444/api-import-export-2.6.0-v2 --admin https://apimmanager:9444/api/am/admin/v0.14 --api_list https://apimmanager:9444/api/am/publisher/v0.14/apis --app_list https://apimmanager:9444/api/am/store/v0.14/applications
2- I added my exported APIs to ~/.wso2apimcli/exported/apis
3- I get the token from APIM OK:
curl -X POST -c cookies http://apimmanager:9764/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin' -k -v
* About to connect() to apimmanager port 9764 (#0)
* Trying 10.61.1.68...
* Connected to apimmanager (10.61.1.68) port 9764 (#0)
> POST /publisher/site/blocks/user/login/ajax/login.jag HTTP/1.1
> User-Agent: curl/7.29.0
> Host: apimmanager:9764
> Accept: */*
> Content-Length: 42
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 200 OK
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-store, no-cache, must-revalidate, private
* Added cookie JSESSIONID="E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A" for domain apimmanager, path /publisher, expire 0
< Set-Cookie: JSESSIONID=E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A; Path=/publisher; HttpOnly
< Content-Type: application/json;charset=UTF-8
< Content-Length: 17
< Date: Mon, 21 Oct 2019 14:32:47 GMT
< Server: WSO2 Carbon Server
<
* Connection #0 to host apimmanager left intact
{"error" : false}
4- I get 403 Forbidden after trying to import an API:
apimcli import-api -f APIM_ABC_v1.0.zip -e apimm_hml -u admin -p admin -k --preserve-provider=false --verbose
[INFO]: Insecure: true
[INFO]: import-api called
[INFO]: Environment: 'apimm_hml'
[INFO]: Import URL: https://apimmanager:9444/api-import-export-2.6.0-v2/import-api?preserveProvider=false
[INFO]: Source Environment: ConsentimentoService_v1.0.zip
ZipFilePath: /home/centos/.wso2apimcli/exported/apis/ConsentimentoService_v1.0.zip
Error importing API.
Status: 403 Forbidden
Error importing API
[ERROR]: 403 Forbidden
From APIMPublisher Log file I get:
WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:<anonymous>, ip:10.61.1.68, method:POST, uri:/api-import-export-2.6.0-v2/import-api, error:required token is missing from the request) {org.owasp.csrfguard.log.JavaLogger}
After the creation of the environment (i.e. step 1), try to log in to the environment using the command below:
apimcli login <environment> -u <username> -p <password>
In your case:
apimcli login apimm_hml -u admin -p admin -k
The CSRF warning in the publisher log ("required token is missing from the request") indicates the import request reached the publisher without an authenticated session, which is what this login step establishes.

AWS C++ S3 SDK PutObjectRequest unable to connect to endpoint

In working with the AWS C++ SDK, I ran into an issue where trying to execute a PutObjectRequest complains that it is "unable to connect to endpoint" when uploading more than ~400KB.
// Headers assumed for this snippet (not in the original post); compressedData
// (char*) and dataSize are defined earlier in the calling code.
#include <aws/core/client/ClientConfiguration.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream_buffer.hpp>

Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);  // freed by the shared_ptr below
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);
As long as my data is less than ~400KB it gets uploaded into a file on S3, but beyond that the client is unable to connect to the endpoint. I should be able to upload up to 5GB in a single PutObjectRequest.
Any thoughts?
Edit:
Responding to @JonathanHenson's comment, the AWS log shows this timeout error repeatedly:
[DEBUG] 2016-08-04 13:42:03 AWSClient [0x700000081000] Request Successfully signed
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Making request to https://s3.amazonaws.com/mybucket/myfile
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Including headers:
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-length: 3151261
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-type: binary/octet-stream
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] host: s3.amazonaws.com
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] user-agent: aws-sdk-cpp/0.13.9 Darwin/15.6.0 x86_64
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Attempting to acquire curl connection.
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Returning connection handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Obtained connection handle 0x10b09cc00
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] HTTP/1.1 100 Continue
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000]
[ERROR] 2016-08-04 13:42:06 CurlHttpClient [0x700000081000] Curl returned error code 28
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Releasing curl handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Notifying waiting threads.
[DEBUG] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request returned error. Attempting to generate appropriate error codes from response
[WARN] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request failed, now waiting 12800 ms before attempting again.
[DEBUG] 2016-08-04 13:42:19 InstanceProfileCredentialsProvider [0x700000081000] Checking if latest credential pull has expired.
Ultimately what fixed this for me was setting the request timeout. The request timeout needs to be long enough for your entire transfer to finish: if you are transferring large files on a slow internet connection, make sure the request timeout is long enough to allow those files to transfer. (In the log above, curl error code 28 is a timeout, and the request dies about three seconds after the 100 Continue, consistent with a short default request timeout being hit mid-upload.)
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
clientConfig.requestTimeoutMs = 600000;
Tweak your config as below and it should work:
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);

Chaos Monkey:AWS EC2: HTTP/1.1 401 Unauthorized error on connecting to eu-central-1(frankfurt)

We are using Chaos Monkey for resilience testing of AWS EC2 clients. When Chaos Monkey tries to authenticate with the given secret key for the Frankfurt region, we get an HTTP/1.1 401 Unauthorized error. We are using the AWS SDK for Java.
It is using signature version v2, and eu-central-1 (Frankfurt) requires v4.
How can we set the signature version to v4, or what setting do we have to make in the AWS SDK?
Here is a snapshot of the error:
2015-01-19 05:37:05.028 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] Sending request -1804412292: POST https://ec2.eu-central-1.amazonaws.com/ HTTP/1.1
2015-01-19 05:37:05.029 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] >> "Action=DescribeInstances&Signature=Ao5fLfM%2B/rOcbdll0LF0K2F9U8NBlgd%2BAwuFk83GOxo%3D&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2015-01-19T10%3A37%3A05.024Z&Version=2010-06-15&AWSAccessKeyId=xxx"
2015-01-19 05:37:05.029 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] >> POST https://ec2.eu-central-1.amazonaws.com/ HTTP/1.1
2015-01-19 05:37:05.029 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] >> Host: ec2.eu-central-1.amazonaws.com
2015-01-19 05:37:05.030 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] >> Content-Type: application/x-www-form-urlencoded
2015-01-19 05:37:05.030 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] >> Content-Length: 225
2015-01-19 05:37:05.609 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] Receiving response -1804412292: HTTP/1.1 401 Unauthorized
2015-01-19 05:37:05.610 - DEBUG SLF4JLogger - [SLF4JLogger.java:61] << HTTP/1.1 401 Unauthorized
Setting the v4 signature is explained here. The relevant bits for Java:
Add the following in your code:
System.setProperty(SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true");
Or, on the command line, specify the following:
-Dcom.amazonaws.services.s3.enableV4

Sending emails using django-SES (Amazon SES)

I've been trying to configure the django-SES service to send outgoing emails and am not sure what is wrong here. I am receiving a strange error message.
settings.py
# Email Configuration using Amazon SES Services
EMAIL_BACKEND = 'django_ses.SESBackend'
# These are optional -- if they're set as environment variables they won't
# need to be set here as well
AWS_SES_ACCESS_KEY_ID = 'xxxxxxx'
AWS_SES_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxx'
# Additionally, you can specify an optional region, like so:
AWS_SES_REGION_NAME = 'us-east-1'
AWS_SES_REGION_ENDPOINT = 'email-smtp.us-east-1.amazonaws.com'
In my design, I insert all emails into a table and then use a celery task to go through all pending emails and fire them off.
Here is my tasks.py:
@task(name='common_lib.send_notification', ignore_result=True)
@transaction.commit_manually
def fire_pending_email():
    try:
        Notification = get_model('common_lib', 'Notification')
        NotificationEmail = get_model('common_lib', 'NotificationEmail')
        pending_notifications = Notification.objects.values_list('id', flat=True).filter(status=Notification.STATUS_PENDING)
        for email in NotificationEmail.objects.filter(notification__in=pending_notifications):
            msg = EmailMultiAlternatives(email.subject, email.text_body, 'noreply@xx.com.xx', [email.send_to, ])
            if email.html_body:
                msg.attach_alternative(email.html_body, "text/html")
            msg.send()
        transaction.commit()
        return 'Successful'
    except Exception as e:
        transaction.rollback()
        logging.error(str(e))
    finally:
        pass
Yet in the celery debug console I am seeing the following error:
[2012-11-13 11:45:28,061: INFO/MainProcess] Got task from broker: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384]
[2012-11-13 11:45:28,069: DEBUG/MainProcess] Mediator: Running callback for task: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384]
[2012-11-13 11:45:28,069: DEBUG/MainProcess] TaskPool: Apply <function trace_task_ret at 0x9f38a3c> (args:('common_lib.send_notification', '4dc71dee-fc7c-4ddc-a02c-4097c73e4384', [], {}, {'retries': 0, 'is_eager': False, 'task': 'common_lib.send_notification', 'group': None, 'eta': None, 'delivery_info': {'priority': None, 'routing_key': u'celery', 'exchange': u'celery'}, 'args': [], 'expires': None, 'callbacks': None, 'errbacks': None, 'hostname': 'ubuntu', 'kwargs': {}, 'id': '4dc71dee-fc7c-4ddc-a02c-4097c73e4384', 'utc': True}) kwargs:{})
[2012-11-13 11:45:28,077: DEBUG/MainProcess] Task accepted: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384] pid:8256
[2012-11-13 11:45:28,097: DEBUG/MainProcess] (0.001) SELECT `common_lib_notification_email`.`id`, `common_lib_notification_email`.`notification_id`, `common_lib_notification_email`.`send_to`, `common_lib_notification_email`.`template`, `common_lib_notification_email`.`subject`, `common_lib_notification_email`.`html_body`, `common_lib_notification_email`.`text_body` FROM `common_lib_notification_email` WHERE `common_lib_notification_email`.`notification_id` IN (SELECT U0.`id` FROM `common_lib_notification` U0 WHERE U0.`status` = 'P' ); args=(u'P',)
[2012-11-13 11:45:28,103: DEBUG/MainProcess] Method: POST
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Path: /
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Data: Action=GetSendQuota
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Headers: {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
[2012-11-13 11:45:28,109: DEBUG/MainProcess] Host: email-smtp.us-east-1.amazonaws.com
[2012-11-13 11:45:28,109: DEBUG/MainProcess] establishing HTTPS connection: host=email-smtp.us-east-1.amazonaws.com, kwargs={}
[2012-11-13 11:45:28,109: DEBUG/MainProcess] Token: None
[2012-11-13 11:45:28,702: DEBUG/MainProcess] wrapping ssl socket; CA certificate file=/home/mo/projects/garageenv/local/lib/python2.7/site-packages/boto/cacerts/cacerts.txt
[2012-11-13 11:45:29,385: DEBUG/MainProcess] validating server certificate: hostname=email-smtp.us-east-1.amazonaws.com, certificate hosts=[u'email-smtp.us-east-1.amazonaws.com']
[2012-11-13 11:45:39,618: ERROR/MainProcess] <unknown>:1:0: syntax error
[2012-11-13 11:45:39,619: INFO/MainProcess] Task common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384] succeeded in 11.5491399765s: None
UPDATE
When I changed the setting to
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'
I got a different error, as below
[2012-11-13 13:24:05,907: DEBUG/MainProcess] Method: POST
[2012-11-13 13:24:05,916: DEBUG/MainProcess] Path: /
[2012-11-13 13:24:05,917: DEBUG/MainProcess] Data: Action=GetSendQuota
[2012-11-13 13:24:05,917: DEBUG/MainProcess] Headers: {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
[2012-11-13 13:24:05,918: DEBUG/MainProcess] Host: email.us-east-1.amazonaws.com
[2012-11-13 13:24:05,918: DEBUG/MainProcess] establishing HTTPS connection: host=email.us-east-1.amazonaws.com, kwargs={}
[2012-11-13 13:24:05,919: DEBUG/MainProcess] Token: None
[2012-11-13 13:24:06,511: DEBUG/MainProcess] wrapping ssl socket; CA certificate file=/home/mo/projects/garageenv/local/lib/python2.7/site-packages/boto/cacerts/cacerts.txt
[2012-11-13 13:24:06,952: DEBUG/MainProcess] validating server certificate: hostname=email.us-east-1.amazonaws.com, certificate hosts=['email.us-east-1.amazonaws.com', 'email.amazonaws.com']
[2012-11-13 13:24:07,177: ERROR/MainProcess] 403 Forbidden
[2012-11-13 13:24:07,178: ERROR/MainProcess] <ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</Error>
<RequestId>41c15592-2d7c-11e2-a590-f33d1568f3ea</RequestId>
</ErrorResponse>
[2012-11-13 13:24:07,180: ERROR/MainProcess] BotoServerError: 403 Forbidden
<ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</Error>
<RequestId>41c15592-2d7c-11e2-a590-f33d1568f3ea</RequestId>
</ErrorResponse>
[2012-11-13 13:24:07,184: INFO/MainProcess] Task common_lib.send_notification[3b6a049e-d5cb-45f4-842b-633d816a132e] succeeded in 1.31089687347s: None
Can you try using this:
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'
and not the SMTP server setting from AWS's dashboard? (You used AWS_SES_REGION_ENDPOINT = 'email-smtp.us-east-1.amazonaws.com', as mentioned above.)
Once you updated this, you got a new error, as shown in the update to your question. This confirms that you now have the correct AWS_SES_REGION_ENDPOINT setting.
The reason you are getting this new error is most likely that you are confusing the access keys and giving Amazon the wrong set of credentials - see the detailed comments here: https://github.com/boto/boto/issues/476#issuecomment-7679158
Follow the solution prescribed in that comment and you should be fine, I think.
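To illustrate the distinction (a hedged sketch; the setting names are the django-ses ones from the question, the values are placeholders): the SMTP credentials generated on the SES SMTP settings page are not the same thing as IAM API access keys, and django-ses/boto talks to the API endpoint, so it needs the latter.

# settings.py - minimal sketch, assuming django-ses as configured above.
# Use IAM API access keys here (an AKIA...-style key id and its secret),
# NOT the SMTP username/password pair generated on the SES SMTP page.
EMAIL_BACKEND = 'django_ses.SESBackend'
AWS_SES_ACCESS_KEY_ID = 'AKIAXXXXXXXXXXXXXXXX'         # placeholder IAM key id
AWS_SES_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxx'  # placeholder IAM secret
AWS_SES_REGION_NAME = 'us-east-1'
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'  # API endpoint, not email-smtp.*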