Boto3 EndpointConnectionError: Could not connect to endpoint (that was previously working fine) - amazon-web-services

I upgraded to boto3 a few months ago, and these operations have always worked properly. To my knowledge nothing has changed, but recently this error has started occurring whenever I try to reach the AWS servers.
import boto3

client = boto3.client(
    'mturk',
    aws_access_key_id=key,
    aws_secret_access_key=secret_key,
    endpoint_url=r"https://mturk-requester.us-east-1.amazonaws.com/")
client.get_hit(HITId=hit.id)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL:
"https://mturk-requester.us-east-1.amazonaws.com/"
This now happens when posting HITs, checking my balance, etc. All of these operations were originally working as intended.
My AWS CLI is configured with:
[default]
region=us-east-1

Seems to work fine for me like this:
client = boto3.client('mturk',region_name='us-east-1')
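If you still want to pass explicit keys rather than rely on the configured profile, the same approach should work with the credentials from the question: drop the hard-coded endpoint_url and let boto3 resolve the endpoint from the region. A sketch, reusing the key/secret_key variables from the question:

import boto3

# Let boto3 derive the MTurk endpoint from the region instead of hard-coding it.
client = boto3.client(
    'mturk',
    region_name='us-east-1',
    aws_access_key_id=key,
    aws_secret_access_key=secret_key)
client.get_hit(HITId=hit.id)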

Related

Can I set AWS credentials on a Spring Boot/Cloud @SqsListener? (Java)

Double newbie here, to both SQS and Spring Cloud. I've created (using the console) an SQS queue. The company wiki I'm working from says then to generate temporary credentials, which come out looking like this:
aws_access_key_id = <secret>
aws_secret_access_key = <secret>
region = us-west-2
aws_session_token = <secret and VERY LONG, like 240 characters>
NOTE: more on that "aws_session_token" later.
So, once I have done that, I can send a message from the CLI, like this.
`aws --endpoint-url https://sqs.us-west-2.amazonaws.com/99999999999999/<queue name>.fifo sqs send-message --queue-url https://sqs.us-west-2.amazonaws.com/99999999999999/<queue name>.fifo --message-body "cli test msg 2" --message-group-id "azgroup"`
So far so good. But now, I want to implement an SqsListener to listen continuously. So, I checked out the code here https://github.com/sixthpoint/spring-boot-sqs-fifo-tutorial, which is a minimal Spring Cloud SQS application, and set all the configs as shown in the readme. My listener, right now, looks simply like this:
@SqsListener(value=SQSURL)
public void process(String json) throws IOException {
    System.out.println("here");
    System.out.println(json);
}
But, when I try to start the application up, I get this error:
com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID:....)
I think what's going on is that at startup, the listener is trying to contact my queue, and is being rejected because it's not sending that aws_session_token. (The company wiki, again, says this: "You will see aws_session_token. This is something you have not had before. It is required for your key to work!")
So, is there a way to explicitly set my AWS parameters, either in the Java code where the @SqsListener is defined, or somewhere in configs, such that the aws_session_token gets passed? It doesn't seem possible to pass an AwsCredentials object. (edit) And it doesn't seem that that would help me anyway, since AwsCredentials doesn't contain that field.
Or . . . is there some other way of solving this?
Answering, or at least partially answering, my own question: it turns out that the aws_session_token is required when, and only when, you are using temporary AWS credentials, which, as I noted, is what I've been given to work with. It has to be added to any CLI operations, but there is no way to set it in the AwsCredentials object in Java code, so that's not going to help me. It may just not be possible to connect from Java code when using temporary credentials. If I'm wrong and there is a way, please let me know.
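For what it's worth, the session token really is just a third credential field alongside the access key and secret. As a sketch in Python (rather than the Spring stack above), boto3 accepts it directly; the values below are placeholders standing in for the temporary credentials from the company wiki:

import boto3

# Temporary (STS) credentials are a key, a secret, AND a session token;
# all three placeholder values would come from the temporary-credentials step.
sqs = boto3.client(
    'sqs',
    region_name='us-west-2',
    aws_access_key_id='<temporary key>',
    aws_secret_access_key='<temporary secret>',
    aws_session_token='<the very long session token>')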

AWS SES: Can't send emails from any one of the new regions

I have the problem that I can't send emails from the new AWS SES environments, which were introduced a month ago.
All the old ones are working fine (e.g. us-east-1, us-west-2, eu-west-1).
But if I want to send a mail from one of the new environments, e.g. eu-central-1, I just get the error message:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
But this can't be the case, because all the old ones are working fine with the same keys.
Therefore I would really appreciate it if somebody else could test the sample code with their account to check whether they have the same issue.
The new environments are eu-central-1, ap-south-1 and ap-southeast-2 (see the SES endpoint URLs documentation).
Sample Code:
var ses = require('node-ses');
var client = ses.createClient({ key: '', secret: '', amazon: 'https://email.eu-central-1.amazonaws.com' });

async function sendMessage() {
    let options = {};
    options.from = "test@aol.com";
    options.to = "test2@aol.com";
    options.subject = "TestMail";
    options.message = "Test";
    console.log("Try to sendMessage");
    client.sendEmail(options, function (err, data, res) {
        console.log("Error: " + JSON.stringify(err));
        console.log("Data: " + data);
        console.log("res: " + res);
    });
}

sendMessage();
The sample code uses the node-ses npm package, and you just need to enter AWS IAM user credentials which have SES access.
If you want to check different regions, you have to change the url in the createClient constructor.
Don't worry, the sample code does not send an email!!!
If the region is working, it should throw an error message similar to this: "Email address is not verified. The following identities failed the check in region EU-WEST-1: test@aol.com, test2@aol.com"
Otherwise the error will be the one described above.
I also have to mention that I am currently still in sandbox mode, so maybe the new regions are blocked for sandbox users?
It's because you must be creating the SES credentials from the IAM console. You should instead create the credentials using the SES interface/console.
Follow this article to create SMTP credentials using the SES interface:
http://docs.amazonwebservices.com/ses/latest/GettingStartedGuide/GetAccessIDs.html
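For anyone who prefers to run the same region check from Python instead of Node, a minimal boto3 sketch like the following should behave the same way; the addresses are placeholders and boto3 handles request signing itself:

import boto3

# Point the client at one of the new regions and attempt a send; in sandbox
# mode an unverified address should produce a "not verified" error if the
# region itself accepts the signed request.
ses = boto3.client('ses', region_name='eu-central-1')
try:
    ses.send_email(
        Source='test@aol.com',
        Destination={'ToAddresses': ['test2@aol.com']},
        Message={'Subject': {'Data': 'TestMail'},
                 'Body': {'Text': {'Data': 'Test'}}})
except Exception as err:
    print(err)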

Cassandra python driver: Client request timeout

I set up a simple script to insert a new record into a Cassandra database. It works fine on my local machine, but I am getting timeout errors from the client after I moved the database to a remote machine. How do I properly set the timeout for this driver? I have tried many things. I hacked the timeout in my IDE and got it to work without timing out, so I know for sure it's just a timeout problem.
How I set up my Cluster:
import os
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.cqlengine import connection

profile = ExecutionProfile(request_timeout=100000)
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile})
connection.setup(hosts=[os.getenv('CASSANDRA_SEED', None)],
                 default_keyspace=os.getenv('KEYSPACE', None),
                 consistency=int(os.getenv('CASSANDRA_SESSION_CONSISTENCY', 1)), auth_provider=auth_provider,
                 connect_timeout=200)
session = self.cluster.connect()
The query I am trying to perform:
model = Model.create(buffer=_buffer, lock=False, version=self.version)
The error I get (truncated):
13..': 'Client request timeout. See Session.execute_async'}, last_host=54.213..
The record I'm inserting is 11mb, so I can understand there is a delay, just increasing the timeout should do it, but I can't seem to figure it out.
The default request timeout is an attribute of the Session object (version 2.0.0 of the driver and later).
session = cluster.connect(keyspace)
session.default_timeout = 60
This is the simplest answer (no need to mess about with an execution profile), and I have confirmed that it works.
https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.Session
You can set request_timeout in the Cluster constructor:
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)],
                       auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile},
                       request_timeout=10)
Reference: https://datastax.github.io/python-driver/api/cassandra/cluster.html
Based on the documentation, request_timeout is an attribute of the ExecutionProfile class, and you can give an execution profile to the Cluster constructor (here is an example).
So, you can do:
import os
from cassandra.cluster import Cluster, ExecutionProfile

execution_profile = ExecutionProfile(request_timeout=600)
profiles = {'node1': execution_profile}
cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], execution_profiles=profiles)
session = cluster.connect()
session.execute('SELECT * FROM test', execution_profile='node1')
Important: when you use execute or execute_async, you have to specify the execution_profile name.
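If you would rather not name a profile on every call, the same profile can instead be registered as the default, so plain execute() calls pick it up automatically. A minimal sketch; the contact point and query are placeholders:

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT

# Register the profile under EXEC_PROFILE_DEFAULT so every execute() uses it.
profile = ExecutionProfile(request_timeout=600)
cluster = Cluster(['127.0.0.1'],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
session.execute('SELECT * FROM test')  # no execution_profile argument needed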

Spark is inventing its own AWS secretKey

I'm trying to read an S3 bucket from Spark, and up until today Spark has always complained that the request returns 403:
hadoopConf = spark_context._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3a.access.key", "ACCESSKEY")
hadoopConf.set("fs.s3a.secret.key", "SECRETKEY")
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
logs = spark_context.textFile("s3a://mybucket/logs/*")
Spark was saying .... Invalid Access key [ACCESSKEY]
However with the same ACCESSKEY and SECRETKEY this was working with aws-cli
aws s3 ls mybucket/logs/
and in python boto3 this was working
import boto3

resource = boto3.resource("s3", region_name="us-east-1")
resource.Object("mybucket", "logs/text.py") \
    .put(Body=open("text.py", "rb"), ContentType="text/x-py")
so my credentials ARE valid and the problem is definitely something with Spark..
Today I decided to turn on the "DEBUG" log for all of Spark, and to my surprise... Spark is NOT using the [SECRETKEY] I have provided but instead... adds a random one???
17/03/08 10:40:04 DEBUG request: Sending Request: HEAD https://mybucket.s3.amazonaws.com / Headers: (Authorization: AWS ACCESSKEY:[RANDON-SECRET-KEY], User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.65-b01/1.8.0_65, Date: Wed, 08 Mar 2017 10:40:04 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
This is why it still returns 403! Spark is not using the key I provide with fs.s3a.secret.key but instead invents a random one??
For the record I'm running this locally on my machine (OSX) with this command
spark-submit --packages com.amazonaws:aws-java-sdk-pom:1.11.98,org.apache.hadoop:hadoop-aws:2.7.3 test.py
Could someone enlighten me on this?
(updated as my original one was downvoted as clearly considered unacceptable)
The AWS auth protocol doesn't send your secret over the wire. It signs the message. That's why what you see isn't what you passed in.
For further information, please reread.
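To make that concrete, here is a rough Python sketch of the legacy S3 signing scheme, which matches the "Authorization: AWS ACCESSKEY:..." header format in the debug log above; the string-to-sign below is illustrative, not the exact one Spark built:

import base64
import hashlib
import hmac

# The secret key never travels over the wire: it is only used to HMAC-sign
# a canonical description of the request, and the resulting signature goes
# into the Authorization header.
def sign(secret_key, string_to_sign):
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode('ascii')

string_to_sign = 'HEAD\n\n\nWed, 08 Mar 2017 10:40:04 GMT\n/mybucket/'
print('Authorization: AWS ACCESSKEY:' + sign('SECRETKEY', string_to_sign))
# The "random" value in the debug log is this signature, not a leaked key.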
I ran into a similar issue. Requests that were using valid AWS credentials returned a 403 Forbidden, but only on certain machines. Eventually I found out that the system time on those particular machines were 10 minutes behind. Synchronizing the system clock solved the problem.
Hope this helps!
This random passkey is very intriguing. Maybe the AWS SDK is getting the password from the OS environment.
In Hadoop 2.8, the default AWS provider chain shows the following list of providers:
BasicAWSCredentialsProvider
EnvironmentVariableCredentialsProvider
SharedInstanceProfileCredentialsProvider
Order, of course, matters! The AWSCredentialProviderChain gets the keys from the first provider that supplies them:
if (credentials.getAWSAccessKeyId() != null &&
        credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());
    lastUsedProvider = provider;
    return credentials;
}
See the code in "GrepCode for AWSCredentialProviderChain".
I faced a similar problem using profile credentials. The SDK was ignoring the credentials inside ~/.aws/credentials (as good practice, I encourage you not to store credentials inside the program in any way).
My solution...
Set the credentials provider to use ProfileCredentialsProvider
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") # yes, I am using central eu server.
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.profile.ProfileCredentialsProvider')
Folks, go for the IAM configuration based on roles ... that will open up the S3 access policies that should be added to the EMR default one.

Celery with Amazon SQS

I want to use Amazon SQS as the broker backend for Celery. There's an SQS transport implementation for Kombu, which Celery depends on, but there is not enough documentation for using it, so I cannot work out how to configure SQS for Celery. Has anybody succeeded in configuring SQS for Celery?
I ran into this question several times but still wasn't entirely sure how to setup Celery to work with SQS. It turns out that it is quite easy with the latest versions of Kombu and Celery. As an alternative to the BROKER_URL syntax mentioned in another answer, you can simply set the transport, options, user, and password like so:
BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
}
BROKER_USER = AWS_ACCESS_KEY_ID
BROKER_PASSWORD = AWS_SECRET_ACCESS_KEY
This gets around a purported issue with the URL parser that doesn't allow forward slashes in your API secret, which seems to be a fairly common occurrence with AWS. Since there didn't seem to be a wealth of information out there about the topic yet, I also wrote a short blog post on the topic here:
http://www.caktusgroup.com/blog/2011/12/19/using-django-and-celery-amazon-sqs/
I'm using Celery 3.0 and was getting deprecation warnings when launching the worker with the BROKER_USER / BROKER_PASSWORD settings.
I took a look at the SQS URL parsing in kombu.utils.url._parse_url and it is calling urllib.unquote on the username and password elements of the URL.
So, to workaround the issue of secret keys with forward slashes, I was able to successfully use the following for the BROKER_URL:
import urllib

BROKER_URL = 'sqs://%s:%s@' % (urllib.quote(AWS_ACCESS_KEY_ID, safe=''),
                               urllib.quote(AWS_SECRET_ACCESS_KEY, safe=''))
I'm not sure if access keys can ever have forward slashes in them but it doesn't hurt to quote it as well.
For anybody stumbling upon this question, I was able to get Celery working out-of-the-box with SQS (no patching required), but I did need to update to the latest versions of Celery and Kombu for this to work (1.4.5 and 1.5.1 as of now). Use the config lines above and it should work (although you'll probably want to change the default region).
Gotcha: in order to use the URL format above, you need to make sure your AWS secret doesn't contain slashes, as this confuses the URL parser. Just keep generating new secrets until you get one without a slash.
Nobody answered about this. Anyway I tried to configure Celery with Amazon SQS, and it seems I achieved a small success.
Kombu should be patched for this, so I wrote some patches and there is my pull request as well. You can configure Amazon SQS by setting a BROKER_URL with the sqs:// scheme in Celery on the patched Kombu. For example:
BROKER_URL = 'sqs://AWS_ACCESS:AWS_SECRET@:80//'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'ap-northeast-1',
    'sdb_persistence': False
}
I regenerated the credentials in the IAM console until I got a key without a slash (/). The parsing issues are only with that character, so if your secret doesn't have one you'll be fine.
Not the most terribly elegant solution, but definitely keeps the code clean of hacks.
Update for Python 3, URL-quoting the slashes in the AWS keys.
from urllib.parse import quote_plus

BROKER_URL = 'sqs://{}:{}@'.format(
    quote_plus(AWS_ACCESS_KEY_ID),
    quote_plus(AWS_SECRET_ACCESS_KEY)
)
I was able to configure SQS on Celery 4.3 (Python 3.7) by using Kombu.
from kombu.utils.url import quote

CELERY_BROKER_URL = 'sqs://{AWS_ACCESS_KEY_ID}:{AWS_SECRET_ACCESS_KEY}@'.format(
    AWS_ACCESS_KEY_ID=quote(AWS_ACCESS_KEY_ID, safe=''),
    AWS_SECRET_ACCESS_KEY=quote(AWS_SECRET_ACCESS_KEY, safe='')
)