gremlinpython client.submitAsync() fails when deployed to Azure as App Service - flask

I have a Python Flask app that runs fine locally (tested with Python 3.8 and 3.9). Locally, the app can connect both to a local Cosmos DB instance using the Gremlin API and to an instance hosted on Microsoft Azure. When a coworker or I deploy the Flask app as an Azure App Service (Python 3.8.6), we get an error when querying Cosmos DB. The stack trace and code are below. I am not sure why I am getting
TypeError: __init__() missing 1 required positional argument: 'max_workers'
since ThreadPoolExecutor has default arguments for all of its parameters. I have tried specifying workers when I initialize the Gremlin client, but it makes no difference; it looks like the client defaults to a number of workers anyway if no value is passed in. I have also specified workers for gunicorn when running on Azure, with no effect. I develop locally on Windows, but the App Service runs on Linux. The Flask app itself starts fine, and I can hit other Flask endpoints that do not query Cosmos.
gunicorn --bind=0.0.0.0 --workers=4 app:app
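For reference, specifying workers on the client looked roughly like this (a sketch; the counts are arbitrary, and pool_size/max_workers are the optional keyword arguments gremlinpython's Client accepts):

# Sketch: explicitly sizing the connection pool and executor made no
# difference on Azure. The counts here are arbitrary.
self.client = client.Client(end_point, 'g',
                            username=username,
                            password=password,
                            message_serializer=serializer.GraphSONSerializersV2d0(),
                            pool_size=4,
                            max_workers=4)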
Stack trace:
File "/tmp/8d9a55045f9e6ed/myApp/grem.py", line xxxx, in __init__
res = self.call_graph(gremlin_query)
File "/tmp/8d9a55045f9e6ed/myApp/grem.py", line xxxx, in call_graph
callback = self.client.submitAsync(query)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/client.py", line 144, in submitAsync
return conn.write(message)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/connection.py", line 55, in write
self.connect()
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/connection.py", line 45, in connect
self._transport.connect(self._url, self._headers)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/aiohttp/transport.py", line 77, in connect
self._loop.run_until_complete(async_connect())
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/aiohttp/transport.py", line 67, in async_connect
self._websocket = await self._client_session.ws_connect(url, **self._aiohttp_kwargs, headers=headers)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/client.py", line 754, in _ws_connect
resp = await self.request(
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/client.py", line 520, in _request
conn = await self._connector.connect(
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 535, in connect
proto = await self._create_connection(req, traces, timeout)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 892, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 999, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 865, in _resolve_host
addrs = await self._resolver.resolve(host, port, family=self._family)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/resolver.py", line 31, in resolve
infos = await self._loop.getaddrinfo(
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 825, in getaddrinfo
return await self.run_in_executor(
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 780, in run_in_executor
executor = concurrent.futures.ThreadPoolExecutor()
TypeError: __init__() missing 1 required positional argument: 'max_workers'
requirements.txt
Flask
Flask-SQLAlchemy
Flask-WTF
Flask-APScheduler
python-dateutil
azure-storage-file-share
gremlinpython
openpyxl
python-dotenv
python-logstash-async
futures
Gremlin Client Initialization
self.client = client.Client(
    end_point,
    'g',
    username=username,
    password=password,
    message_serializer=serializer.GraphSONSerializersV2d0()
)
Send query to Gremlin; this line raises the exception:
callback = self.client.submitAsync(query)
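For completeness, the result is then consumed with the usual gremlinpython future pattern (a sketch; the variable names are mine):

# submitAsync returns a future that resolves to a ResultSet;
# ResultSet.all() returns another future holding the list of results.
result_set = callback.result()
results = result_set.all().result()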

I got past this issue by creating a new requirements.txt with pip freeze > requirements.txt against my local code, then deploying the application with the updated file. My guess is that Azure was providing a different version of aiohttp that was not compatible with Python 3.8.6, but I am not sure. In any case, pinning all of these dependencies got me past the issue. Hopefully this helps someone else down the road.
aenum==2.2.6
aiohttp==3.7.4
APScheduler==3.8.0
async-timeout==3.0.1
attrs==21.2.0
azure-core==1.16.0
azure-cosmos==4.2.0
azure-functions==1.7.2
azure-storage-blob==12.8.1
azure-storage-file-share==12.5.0
certifi==2021.5.30
cffi==1.14.6
chardet==3.0.4
click==8.0.1
colorama==0.4.4
cryptography==3.4.7
et-xmlfile==1.1.0
Flask==1.1.2
Flask-APScheduler==1.12.2
gremlinpython==3.5.1
idna==2.10
isodate==0.6.0
itsdangerous==2.0.1
Jinja2==3.0.1
limits==1.5.1
MarkupSafe==2.0.1
msrest==0.6.21
multidict==5.1.0
neo4j==4.3.2
nest-asyncio==1.5.1
oauthlib==3.1.1
openpyxl==3.0.7
pycparser==2.20
pylogbeat==2.0.0
pyparsing==2.4.7
python-dateutil==2.8.2
python-dotenv==0.19.0
python-logstash-async==2.3.0
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
requests-oauthlib==1.3.0
six==1.16.0
tornado==6.1
typing-extensions==3.10.0.0
tzlocal==2.1
urllib3==1.26.6
Werkzeug==2.0.1
yarl==1.6.3

Please note that client.submitAsync(query) has since been deprecated in favor of client.submit_async; see https://github.com/apache/tinkerpop/blob/master/gremlin-python/src/main/python/gremlin_python/driver/client.py
def submitAsync(self, message, bindings=None, request_options=None):
    warnings.warn(
        "gremlin_python.driver.client.Client.submitAsync will be replaced by "
        "gremlin_python.driver.client.Client.submit_async.",
        DeprecationWarning)
    return self.submit_async(message, bindings, request_options)

def submit_async(self, message, bindings=None, request_options=None):
    ...
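On newer gremlinpython releases the call site would therefore look like this (a sketch mirroring the snippet above; the future/ResultSet handling is unchanged):

# submit_async is the non-deprecated spelling
callback = self.client.submit_async(query)
results = callback.result().all().result()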

Related

AWS SAM starting local api returns "Function name is required" error

We are using CDK to build our infrastructure configuration. In case it helps, I create my template.yml for SAM with cdk synth <stack_name> --no-staging > template.yml. I am using the AWS Toolkit to invoke/debug my Lambda functions in IntelliJ, which works fine. However, if I run sam local start-api in a terminal and send a request to one of my functions, it returns an error with this stack trace:
Traceback (most recent call last):
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 2317, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1840, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1743, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/_compat.py", line 36, in reraise
raise value
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1838, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1824, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/local/apigw/local_apigw_service.py", line 203, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/commands/local/lib/local_lambda.py", line 84, in invoke
function = self.provider.get(function_name)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/lib/providers/sam_function_provider.py", line 65, in get
raise ValueError("Function name is required")
ValueError: Function name is required
This is the command I run
sam local start-api --env-vars env.json --docker-network test
which gives the output
Mounting None at http://127.0.0.1:3000/v1 [GET, OPTIONS, POST]
Mounting None at http://127.0.0.1:3000/v1/user [GET, OPTIONS, POST]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2020-08-22 16:32:46 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2020-08-22 16:33:03 Exception on /v1/user [OPTIONS]
And here is the env.json I am using as environment variables for my functions
{
"tenantGetV1Function54F63CB9": {
"db": "alpha",
"connectionString": "mongodb://mongo"
},
"tenantPostV1FunctionA56822D0": {
"db": "alpha",
"connectionString": "mongodb://mongo"
},
"userGetV1Function7E6E55C2": {
"db": "alpha",
"connectionString": "mongodb://mongo"
},
"userPostV1FunctionEB035EB0": {
"db": "alpha",
"connectionString": "mongodb://mongo"
}
}
I am also running Docker Desktop on macOS.
EDIT: Here you can find a simplified template.yml with only one endpoint (one function definition), for the tenantGetV1Function54F63CB9 function. It maps to the GET /v1 endpoint. I didn't want to include the whole template for all 4 functions, which runs to around a thousand lines of YAML.
https://gist.github.com/flexelem/d887136484d508e313e0a745c30a2d97
The problem is solved if I create the LambdaIntegration by passing the Function instance instead of its Alias instance in CDK. We create the lambdas along with an alias, then pass the alias to the associated Resource instance from API Gateway.
This is how we were creating them:
Function tenantGetV1Function = Function.Builder.create(this, "tenantGetV1Function")
        .role(roleLambda)
        .runtime(Runtime.JAVA_8)
        .code(lambdaCode)
        .handler("com.yolda.tenant.lambda.GetTenantHandler::handleRequest")
        .memorySize(512)
        .timeout(Duration.minutes(1))
        .environment(environment)
        .description(Instant.now().toString())
        .build();

Alias tenantGetV1Alias = Alias.Builder.create(this, "tenantGetV1Alias")
        .aliasName("live")
        .version(tenantGetV1Function.getCurrentVersion())
        .provisionedConcurrentExecutions(provisionedConcurrency)
        .build();

Resource tenantResource = v1Resource.addResource("{tenantId}");
tenantResource.addMethod("GET", LambdaIntegration.Builder.create(tenantGetV1Alias).build(), options);
And if I replace tenantGetV1Alias with tenantGetV1Function, then sam build successfully builds all the functions, which lets sam local start-api spin them up.

Resource tenantResource = v1Resource.addResource("{tenantId}");
tenantResource.addMethod("GET", LambdaIntegration.Builder.create(tenantGetV1Function).build(), options);
Somehow, SAM is not able to read the function name property from the CloudFormation template if we assign aliases.

httplib2.ServerNotFoundError: Unable to find the server at www.googleapis.com

When I try to fetch Gmail details using the Google API clients (googleapiclient, oauth2client), I get the error below:
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/google_api_python_client-1.7.8-py2.7.egg/googleapiclient/_helpers.py", line 130, in positional_wrapper
File "/usr/local/lib/python2.7/site-packages/google_api_python_client-1.7.8-py2.7.egg/googleapiclient/discovery.py", line 224, in build
File "/usr/local/lib/python2.7/site-packages/google_api_python_client-1.7.8-py2.7.egg/googleapiclient/discovery.py", line 274, in _retrieve_discovery_doc
File "/usr/local/lib/python2.7/site-packages/httplib2-0.12.1-py2.7.egg/httplib2/__init__.py", line 2135, in request
cachekey,
File "/usr/local/lib/python2.7/site-packages/httplib2-0.12.1-py2.7.egg/httplib2/__init__.py", line 1796, in _request
conn, request_uri, method, body, headers
File "/usr/local/lib/python2.7/site-packages/httplib2-0.12.1-py2.7.egg/httplib2/__init__.py", line 1707, in _conn_request
raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
httplib2.ServerNotFoundError: Unable to find the server at www.googleapis.com
It works fine on my PC, but not from the remote location.
Code:
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

credentials = ServiceAccountCredentials.from_json_keyfile_name(
    "quickstart-1551349397232-e8bcb3368ae1.json",
    scopes=['https://www.googleapis.com/auth/admin.directory.group',
            'https://www.googleapis.com/auth/admin.directory.user',
            'https://www.googleapis.com/auth/admin.directory.domain',
            'https://www.googleapis.com/auth/gmail.readonly'])
delegated_credentials = credentials.create_delegated('jango@carbonitedepot.com')
directory = build('admin', 'directory_v1', credentials=delegated_credentials)

try:
    results = directory.users().list(customer='my_customer').execute()
    users = results.get('users', [])
    res = []
    for info in users:
        print(info)
        res.append(info.get("primaryEmail"))
    print(res)
except Exception as e:
    print(e)
Any help would be much appreciated.
Thanks in advance.
I had the same issue and searched a lot to fix it; it turned out I was at fault. Somehow podman had been installed instead of Docker, and that caused the problem. I suggest you check your Docker version and make sure the latest is running on the server; otherwise the library works fine. Please let me know if you still face this issue, I'd be happy to dig deeper!
httplib2 is likely scanning through the network interface's DHCP DNS nameservers (in your registry or visible to Docker) and then trying to connect through a stale DNS entry or possibly an IPv6 address. Patching the socket module in your main routine may solve it:
# Monkey patch to force IPv4, since the lookup seems to hang on IPv6
import socket

old_getaddrinfo = socket.getaddrinfo

def new_getaddrinfo(*args, **kwargs):
    responses = old_getaddrinfo(*args, **kwargs)
    return [response for response in responses
            if response[0] == socket.AF_INET]

socket.getaddrinfo = new_getaddrinfo
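A quick sanity check (my addition, not from the original answer) is to resolve the host directly after patching; only IPv4 (AF_INET) entries should come back:

print(socket.getaddrinfo('www.googleapis.com', 443))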

How to use service account with private key file and impersonated user (DWD) in Google App Engine and local development server?

Application: Google App Engine Python standard environment
Purpose: Access Google APIs (not Cloud APIs) through the google-api-python-client, e.g. Sheets API v4, by using a service account and impersonate a user, because the app is supposed to act on behalf of this user. (2-legged auth, the user won't be asked to grant access)
I've got a setup running in the production environment, but it only runs on the local development server (dev_appserver.py) if a certain environment variable is removed. I'm looking for a solution that works in both environments without adding/removing the environment variable.
The service account was created for the app and configured with domain-wide delegation (DWD) in the Admin Console. The Sheets API is turned on for this project.
Of the many quick starts, samples, and references available, it was only after reading the Google Auth Library for Python (google-auth) documentation that I noticed the missing parts (an environment variable and the SSL library) and finally got the code running in production.
The app code will use the private key JSON file that was downloaded from Cloud Console IAM.
requirements.txt
# as suggested by almost all docs, but this isn't everything we need:
google-api-python-client==1.6.5
google-auth==1.4.0
google-auth-httplib2==0.0.3
app.yaml
env_variables:
  # enable socket support of paid app, needed for OAuth2 service accounts
  # see google-auth documentation, v1.4.1, chapter 1.2.4
  GAE_USE_SOCKETS_HTTPLIB: true
  # some other stuff

libraries:
  # to make HTTPS calls to other services, needed for OAuth2 service accounts
  # see google-auth documentation, v1.4.1, chapter 1.2.4
  - name: ssl
    version: latest
appengine_config.py (partial sample for Sheets API v4 access)
import os

import googleapiclient.discovery
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
APP_ROOT_DIR = os.path.abspath(os.path.dirname(__file__))
SERVICE_ACCOUNT_FILE = "service-account-private-key.json"

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
# impersonate user@example.com (G Suite domain account)
credentials = credentials.with_subject('user@example.com')
service = googleapiclient.discovery.build('sheets', 'v4', credentials=credentials)
# up to here, the code works in production and on the local dev server

result = service.spreadsheets().values().get(
    spreadsheetId="DOC-ID-HERE", range="A1:C5").execute()
# execute() will work only in production;
# on the local dev server it raises a ResponseNotReady exception
traceback
ERROR 2018-03-05 16:32:03,183 wsgi.py:263]
Traceback (most recent call last):
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 351, in __getattr__
self._update_configs()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 287, in _update_configs
self._registry.initialize()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/api/lib_config.py", line 160, in initialize
import_func(self._modname)
File "/Users/user/git/project/gae/appengine_config.py", line 143, in <module>
spreadsheetId=spreadsheetId, range=rangeName).execute()
File "/Users/user/git/project/gae/_lib/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Users/user/git/project/gae/_lib/googleapiclient/http.py", line 839, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/Users/user/git/project/gae/_lib/googleapiclient/http.py", line 166, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/Users/user/git/project/gae/_lib/google_auth_httplib2.py", line 187, in request
self._request, method, uri, request_headers)
File "/Users/user/git/project/gae/_lib/google/auth/credentials.py", line 121, in before_request
self.refresh(request)
File "/Users/user/git/project/gae/_lib/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/Users/user/git/project/gae/_lib/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/Users/user/git/project/gae/_lib/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/Users/user/git/project/gae/_lib/google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1659, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1399, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/Users/user/git/project/gae/_lib/httplib2/__init__.py", line 1355, in _conn_request
response = conn.getresponse()
File "/Users/user/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/python_std_lib/httplib.py", line 1121, in getresponse
raise ResponseNotReady()
I have figured out that if I delete GAE_USE_SOCKETS_HTTPLIB from app.yaml's env_variables list, the code works on the local development server (but no longer in production).
Am I doing something wrong here? Could I use the same code (maybe with a small switch) for both environments, without manually adding/removing the variable from app.yaml?
Purpose: Access Google APIs (not Cloud APIs) through the google-api-python-client, e.g. Sheets API v4, ….
Here they explain that:
Private, broadcast, multicast, and Google IP ranges (except those whitelisted below), are blocked:
Google Public DNS: 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844 port 53
Gmail SMTPS: smtp.gmail.com port 465 and 587
Gmail POP3S: pop.gmail.com port 995
Gmail IMAPS: imap.gmail.com port 993
I have figured out that if I delete GAE_USE_SOCKETS_HTTPLIB from app.yaml's env_variables list, the code will work on local development server (but not in production anymore).
This is explained here, under "Using sockets with the development server": you can run and test code using sockets on the development server, without using any special command line parameters.
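Given that, one option (my sketch, not a documented switch; whether the runtime honors the variable when set this way is an assumption worth testing) is to set the flag from code only when not on the dev server, instead of editing app.yaml:

# appengine_config.py -- sketch: claim the socket transport only in
# production. SERVER_SOFTWARE starts with "Development" under dev_appserver.py.
import os

if not os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    os.environ['GAE_USE_SOCKETS_HTTPLIB'] = 'true'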
Finally, this question and the accepted answer describe a similar scenario.
Hope this helps you :-)

Exception when trying to create bigquery table via python API

I'm working on an app that will stream events into BQ. Since streaming inserts require the table to pre-exist, I'm running the following code to check whether the table exists and create it if it doesn't:
TABLE_ID = "data" + single_date.strftime("%Y%m%d")

exists = False
request = bigquery.tables().list(projectId=PROJECT_ID,
                                 datasetId=DATASET_ID)
response = request.execute()
while response is not None:
    for t in response.get('tables', []):
        if t['tableReference']['tableId'] == TABLE_ID:
            exists = True
            break
    request = bigquery.tables().list_next(request, response)
    if request is None:
        break
    response = request.execute()  # fetch the next page

if not exists:
    print("Creating Table " + TABLE_ID)
    dataset_ref = {'datasetId': DATASET_ID,
                   'projectId': PROJECT_ID}
    table_ref = {'tableId': TABLE_ID,
                 'datasetId': DATASET_ID,
                 'projectId': PROJECT_ID}
    schema_ref = SCHEMA
    table = {'tableReference': table_ref,
             'schema': schema_ref}
    table = bigquery.tables().insert(body=table, **dataset_ref).execute(http)
I'm running Python 2.7 and have installed the Google API client through pip.
When I try to run the script, I get the following error:
No handlers could be found for logger "oauth2client.util"
Traceback (most recent call last):
File "do_hourly.py", line 158, in <module>
main()
File "do_hourly.py", line 101, in main
body=table, **dataset_ref).execute(http)
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 142, in positional_wrapper
File "/usr/lib/python2.7/site-packages/googleapiclient/http.py", line 721, in execute
resp, content = http.request(str(self.uri), method=str(self.method),
AttributeError: 'module' object has no attribute 'request'
I tried researching the issue, but all I could find was information about confusion between urllib, urllib2, and Python 2.7 / 3.
I'm not quite sure how to continue with this, and will appreciate all help.
Thanks!
I figured out that the issue was in the following line, which I took from another SO thread:
table = bigquery.tables().insert(body=table, **dataset_ref).execute(http)
Once I removed the http variable, which doesn't exist in my scope, the exception disappeared.
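The corrected call simply drops the argument; execute() with no argument reuses the authorized http object the service was built with:

table = bigquery.tables().insert(body=table, **dataset_ref).execute()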

Error using api.update_status method in tweepy using Oauth2

I have double-checked all the auth parameters. Here is my code:
import tweepy
CONSUMER_KEY ='#Omitted - you should not publish your actual key'
CONSUMER_SECRET ='#Omitted - you should not publish your actual secret'
ACCESS_KEY='#Omitted - you should not publish your access key'
ACCESS_SECRET = '#Omitted - you should not publish your access secret'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
api.update_status('Tweeting from command line')
I saved the file in my home folder as status.py. After running python status.py, the following error comes up:
Traceback (most recent call last):
File "status.py", line 14, in <module>
api.update_status('Tweeting from command line')
File "/usr/local/lib/python2.7/dist-packages/tweepy-1.10-py2.7.egg/tweepy/binder.py", line 185, in _call
return method.execute()
File "/usr/local/lib/python2.7/dist-packages/tweepy-1.10-py2.7.egg/tweepy/binder.py", line 168, in execute
raise TweepError(error_msg, resp)
tweepy.error.TweepError: Could not authenticate with OAuth.
Please help me out.
I received this error under the same conditions: using tweepy, with all of my keys/secrets copied and pasted correctly. The problem was the time on my server. After running ntpdate -b pool.ntp.org I was able to use tweepy just fine.
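OAuth request signatures include a timestamp, which is why a skewed clock breaks authentication. To measure the skew from Python you could do something like this (my sketch; ntplib is a third-party package, pip install ntplib):

# Sketch: report how far the local clock is from an NTP reference.
import ntplib

offset = ntplib.NTPClient().request('pool.ntp.org').offset
print('local clock is off by %.1f seconds' % offset)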
I am able to authenticate using tweepy. I have an extra line in my code, though; it might help to change your imports to this:
import tweepy
from tweepy import OAuthHandler
then proceed with the rest of your code. Also add a line that prints to the shell to confirm you are connected:
print api.me().name
Make sure this line comes right after api = tweepy.API(auth).
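Putting that together, the verification looks like this (a sketch in the post's Python 2 style; the print fails immediately if the OAuth handshake is broken):

import tweepy
from tweepy import OAuthHandler

auth = OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
print api.me().name  # fails here first if OAuth is misconfigured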
Try api.update_status(status='Tweeting from command line'). It helped me.