Local standard App Engine/Py fails auth with remote datastore [duplicate] - python-2.7

I recently updated my gcloud libraries from 118.0.0 to 132.0.0, and remote_api_shell immediately stopped working. I went through a number of permutations: re-logging in, setting the application-default credentials through gcloud, and using a service account with an environment variable. All permutations failed with the same error message:
Traceback (most recent call last):
File "/Users/mbostwick/google-cloud-sdk/bin/remote_api_shell.py", line 133, in <module>
run_file(__file__, globals())
File "/Users/mbostwick/google-cloud-sdk/bin/remote_api_shell.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/remote_api_shell.py", line 160, in <module>
main(sys.argv)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/remote_api_shell.py", line 156, in main
oauth2=True)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
secure=secure, app_id=appid)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 769, in ConfigureRemoteApiForOAuth
rpc_server_factory=rpc_server_factory)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 839, in ConfigureRemoteApi
app_id = GetRemoteAppIdFromServer(server, path, rtok)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 569, in GetRemoteAppIdFromServer
response = server.Send(path, payload=None, **urlargs)
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/appengine_rpc_httplib2.py", line 259, in Send
NeedAuth()
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/appengine_rpc_httplib2.py", line 235, in NeedAuth
RaiseHttpError(url, response_info, response, 'Too many auth attempts.')
File "/Users/mbostwick/google-cloud-sdk/platform/google_appengine/google/appengine/tools/appengine_rpc_httplib2.py", line 85, in RaiseHttpError
raise urllib2.HTTPError(url, response_info.status, msg, response_info, stream)
urllib2.HTTPError: HTTP Error 401: Unauthorized Too many auth attempts.
After rolling back through 131.0.0 and 130.0.0, I went all the way back to 118.0.0, logged in again, and everything worked fine.
I did not update the running application after updating gcloud, as I'm in the middle of a release cycle at the moment, so that may have been the issue. Any help would be appreciated. Thanks!

TL;DR: This was fixed in gcloud version 134
Original answer: Run
gcloud auth application-default login --scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email
Now your remote shell should work again.
Details:
I think this was broken by the 128.0.0 update, along with the changes to the gcloud auth login command. The old tokens have the following scopes (according to Google's tokeninfo endpoint):
https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/plus.me
The new tokens from gcloud auth application-default login without any options only have:
https://www.googleapis.com/auth/cloud-platform
This behavior is documented in gcloud auth application-default login --help.
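If you want to double-check which scopes your current token actually carries, you can ask the tokeninfo endpoint directly. A minimal sketch, assuming gcloud is on your PATH (the command and endpoint are real; the Python 2 plumbing is just illustration):

import json
import subprocess
import urllib2  # Python 2, matching the question's environment

# Ask gcloud for the current application-default access token.
token = subprocess.check_output(
    ["gcloud", "auth", "application-default", "print-access-token"]).strip()

# The tokeninfo endpoint reports the scopes attached to the token.
info = json.load(urllib2.urlopen(
    "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=" + token))
print(info.get("scope"))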
Version 134 details: The scopes requested are now:
https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/cloud-platform
See discussion at https://groups.google.com/d/msg/google-appengine/ptc-76K6Kk4/9qr4601BBgAJ

Related

GCP Removed Access To Google Colab User

I used to be able to use Secret Manager to get secrets from my GCP account in Google Colab.
Now, whenever I try to run the following code:
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = f"projects/my_project_here/secrets/my_secret_name_here/versions/latest"
response = client.access_secret_version(request={"name": name})
I get the following error over and over:
ERROR:grpc._plugin_wrapping:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x7fb313b41850>" raised exception!
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/google/auth/compute_engine/credentials.py", line 111, in refresh
self._retrieve_info(request)
File "/usr/local/lib/python3.7/dist-packages/google/auth/compute_engine/credentials.py", line 88, in _retrieve_info
request, service_account=self._service_account_email
File "/usr/local/lib/python3.7/dist-packages/google/auth/compute_engine/_metadata.py", line 234, in get_service_account_info
return get(request, path, params={"recursive": "true"})
File "/usr/local/lib/python3.7/dist-packages/google/auth/compute_engine/_metadata.py", line 187, in get
response,
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7fb313b9c290>)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/grpc/_plugin_wrapping.py", line 90, in __call__
context, _AuthMetadataPluginCallback(callback_state, callback))
File "/usr/local/lib/python3.7/dist-packages/google/auth/transport/grpc.py", line 101, in __call__
callback(self._get_authorization_headers(context), None)
File "/usr/local/lib/python3.7/dist-packages/google/auth/transport/grpc.py", line 88, in _get_authorization_headers
self._request, context.method_name, context.service_url, headers
File "/usr/local/lib/python3.7/dist-packages/google/auth/credentials.py", line 133, in before_request
self.refresh(request)
File "/usr/local/lib/python3.7/dist-packages/google/auth/compute_engine/credentials.py", line 117, in refresh
six.raise_from(new_exc, caught_exc)
File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\nb''", <google.auth.transport.requests._Response object at 0x7fb313b9c290>)
How can I see which user I am in Google Colab, and then grant that user access in GCP so I can fetch these secrets again?
I know to go to GCP->Secret Manager->My_Secret->Permissions->+Grant Access, but I don't know 1) who to add, or 2) why this permission changed on its own with no intervention on anyone's end.
Both were originally running under my email (and they still are), so this worked without me ever having to touch secret access: the App Engine default service account had the Secret Manager Secret Accessor role.
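A minimal sketch for checking which identity the client libraries resolve inside the notebook, using google.auth (the guarded attribute lookup is an assumption, since the credential type varies by environment):

import google.auth

# Resolve the application-default credentials the client libraries use.
credentials, project = google.auth.default()
print("project:", project)

# Service-account-style credentials expose the account's email address;
# user credentials may not, hence the fallback.
print("identity:", getattr(credentials, "service_account_email", "unknown"))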

Setting up a scheduled query with a service account error PermissionDenied: 403 The caller does not have permission in data_transfer_service

I am trying to schedule a query using the BigQuery Data Transfer API. I granted the required bigquery.admin permission and enabled the BigQuery Data Transfer API.
Permission Documentation:
https://cloud.google.com/bigquery-transfer/docs/enable-transfer-service
I also tried granting the project Owner role to the service account, but it still gives the same error.
Code Documentation: (Setting up a scheduled query with a service account)
https://cloud.google.com/bigquery/docs/scheduling-queries
The part where the error occurs:
transfer_config = transfer_client.create_transfer_config(
    bigquery_datatransfer.CreateTransferConfigRequest(
        parent=parent,
        transfer_config=transfer_config,
        service_account_name=service_account_name,
    )
)
Error stack trace:
Traceback (most recent call last):
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 73, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "The caller does not have permission"
debug_error_string = "{"created":"#1633536014.842657676","description":"Error received from peer ipv4:142.250.192.138:443","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"The caller does not have permission","grpc_status":7}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "__main__.py", line 728, in <module>
mbc.schedule_query()
File "/home/ubuntu/prod/trell-ds-framework/data_engineering/data_migration/schedule_quries.py", line 62, in schedule_query
service_account_name=service_account_name,
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/google/cloud/bigquery_datatransfer_v1/services/data_transfer_service/client.py", line 647, in create_transfer_config
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/home/ubuntu/prod/venv_trellai/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 75, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 The caller does not have permission
The service account has all of these roles:
BigQuery Admin
BigQuery Data Transfer Service Agent
Service Account Token Creator
Storage Admin
I am already setting the JSON credentials in an environment variable, but it still gives the permission error:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = Constants.BIG_QUERY_SERVICE_ACCOUNT_CRED
Can anyone help me out here? Thanks in advance.
Take a look at this page on authentication: https://cloud.google.com/bigquery/docs/authentication/service-account-file#python
Assuming you're using a service account, you can provide the credentials explicitly to confirm they work as expected:
from google.cloud import bigquery
from google.oauth2 import service_account

# TODO(developer): Set key_path to the path to the service account key file.
# key_path = "path/to/service_account.json"

credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

client = bigquery.Client(credentials=credentials, project=credentials.project_id)
I would recommend checking that the service account you are using belongs to the project you are working in and has all the permissions needed to schedule the query. My best guess is that the service account points to another project.
Also, the service account needs one extra role: Service Account Token Creator.
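The same explicit-credentials pattern applies to the Data Transfer client itself, and it also makes it easy to confirm which project the key belongs to. A sketch (the key path is a placeholder; passing credentials to the client constructor is the same pattern as above):

from google.cloud import bigquery_datatransfer
from google.oauth2 import service_account

# Load the key explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS,
# to rule out environment-variable problems.
credentials = service_account.Credentials.from_service_account_file(
    "path/to/service_account.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
print(credentials.project_id)  # confirm which project this key actually belongs to

transfer_client = bigquery_datatransfer.DataTransferServiceClient(
    credentials=credentials)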

AWS polly sample example in python?

This is my first time trying AWS services. I have to integrate AWS Polly with Asterisk for text-to-speech.
Here is the example code I wrote to convert text to speech:
from contextlib import closing

from boto3 import client

polly = client("polly", "us-east-1")
response = polly.synthesize_speech(
    Text="Good Morning. My Name is Rajesh. I am Testing Polly AWS Service For Voice Application.",
    OutputFormat="mp3",
    VoiceId="Raveena")
print(response)

if "AudioStream" in response:
    with closing(response["AudioStream"]) as stream:
        data = stream.read()
        # MP3 data is binary, so write in binary mode.
        fo = open("pollytest.mp3", "wb")
        fo.write(data)
        fo.close()
I am getting the following error:
Traceback (most recent call last):
File "pollytest.py", line 11, in <module>
VoiceId="Raveena")
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 530, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 166, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 150, in create_request
operation_name=operation_model.name)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 147, in sign
auth.add_auth(request)
File "/usr/local/lib/python2.7/dist-packages/botocore/auth.py", line 316, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I want to provide credentials directly in this script so that I can use it in the Asterisk system application.
UPDATE:
I created a file ~/.aws/credentials with the content below:
[default]
aws_access_key_id=XXXXXXXX
aws_secret_access_key=YYYYYYYYYYY
Now it works fine for my current login user, but it is not working for the Asterisk PBX.
Your code runs perfectly fine for me!
The last line is saying:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
So, it is unable to authenticate against AWS.
If you are running this code on an Amazon EC2 instance, the simplest method is to assign an IAM Role to the instance when it is launched (it can't be added later). This will automatically provide credentials that can be used by applications running on the instance -- no code changes required.
Alternatively, you could obtain an Access Key and Secret Key from IAM for your IAM User and store those credentials in a local file via the aws configure command.
It is bad practice to put credentials in source code, since they may become compromised.
See:
IAM Roles for Amazon EC2
Best Practices for Managing AWS Access Keys
Please note that the Asterisk PBX usually runs under the asterisk user, so you have to set up authentication for that user, not root.
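For example, if the asterisk user's home directory is /var/lib/asterisk (an assumption; check getent passwd asterisk on your system), copying your working credentials file there should do it:

sudo mkdir -p /var/lib/asterisk/.aws
sudo cp ~/.aws/credentials /var/lib/asterisk/.aws/credentials
sudo chown -R asterisk:asterisk /var/lib/asterisk/.aws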

spark spark-ec2 credentials using aws_security_token

I would like to ask whether it is currently possible to use the spark-ec2 script (https://spark.apache.org/docs/latest/ec2-scripts.html) with credentials that consist not only of aws_access_key_id and aws_secret_access_key, but also include an aws_security_token.
When I try to run the script, I get the following error message:
ERROR:boto:Caught exception reading instance data
Traceback (most recent call last):
File "/Users/zikes/opensource/spark/ec2/lib/boto-2.34.0/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 64] Host is down>
ERROR:boto:Unable to read instance data, giving up
No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
Does anyone have any idea what could be wrong? Is aws_security_token the problem? It seems to me more like a boto problem than a Spark problem.
I have tried both:
1) setting credentials in ~/.aws/credentials and ~/.aws/config
2) setting credentials with these commands:
export aws_access_key_id=<my_aws_access_key>
export aws_secret_access_key=<my_aws_seecret_key>
export aws_security_token=<my_aws_security_token>
My launch command is:
./spark-ec2 -k my_key -i my_key.pem --additional-tags "mytag:tag1,mytag2:tag2" --instance-profile-name "profile1" -s 1 launch test
You can set up your credentials and config using the command aws configure.
I had the same issue, but in my case my AWS_SECRET_ACCESS_KEY contained a slash. I regenerated the key until there was no slash, and it worked.
The problem was that I did not use a profile called default; after renaming it, everything worked well.
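For reference, a ~/.aws/credentials file that includes the token would look like the sketch below (boto reads aws_security_token from the profile, though support varies by boto version, so treat the key name as something to verify):

[default]
aws_access_key_id = <my_aws_access_key>
aws_secret_access_key = <my_aws_secret_key>
aws_security_token = <my_aws_security_token>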

fabric deploy problem

I'm trying to deploy a Django app with fabric and get the following error:
Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> -
u <username> -p <password>
[192.168.2.93] run: cat /etc/issue
Traceback (most recent call last):
File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main
File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 37, in deploy
checkup()
File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 140, in checkup
if not 'Ubuntu' in run('cat /etc/issue'):
File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper
File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run
File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__
File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect
File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys
File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load
File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line
File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__
paramiko.SSHException: Invalid key
Alexs-MacBook:fabric alex$
I can't connect to the server via ssh. What could the problem be?
Regards, Arshavski Alexander.
Going out on a limb here, I'd say your SSH key is incorrect:
paramiko.SSHException: Invalid key
What does your server say when you try to ssh into it using the username and password you provided to fabric?
On second thought: since you are providing fabric with a password, your SSH host key may have changed and/or may not yet have been added to ~/.ssh/known_hosts.
Yeah, I'd say that the host key on the machine you're connecting to has changed. (Or you are connecting from a machine that never went through the "xxx is an unknown host, do you want to add it to the list of known hosts?" dialogue.)
If you are not concerned about man-in-the-middle attacks, or you changed the key yourself a few days ago, add the following line somewhere in your env variables:
env.disable_known_hosts = True
That should take care of it!
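For context, a minimal Fabric 1.x sketch showing where that setting lives (fabric.api.env is the real settings object; the task body is illustrative boilerplate based on the question's traceback):

# fabfile.py -- minimal Fabric 1.x sketch
from fabric.api import env, run

# Skip paramiko's known_hosts verification for changed/unknown host keys.
env.disable_known_hosts = True

def checkup():
    # The call that was failing in the question's traceback.
    print(run('cat /etc/issue'))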