Error connecting to Google Cloud SQL using Flask-SQLAlchemy - flask

I'm a total beginner with Flask and Google Cloud SQL. I want to build login and registration in my app using Flask-Login and Flask-SQLAlchemy.
But I can't connect to Google Cloud SQL using Flask-SQLAlchemy. This is my code:
SQLALCHEMY_DATABASE_URI = (
    'mysql+pymysql://{db_user}:{db_password}#/{db_name}'
    '?unix_socket=/cloudsql/{connection_name}').format(
        user=db_user, password=db_password,
        database=db_name, connection_name=connection_name)
app.config['SQLALCHEMY_DATABASE_URI'] = SQLALCHEMY_DATABASE_URI
I just got this error:
(2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)")
But when I try to make a connection using SQLAlchemy directly with this code, the connection succeeds:
query_string = dict({"unix_socket": "/cloudsql/{}".format(connection_name)})
driver_name = 'mysql+pymysql'
db = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL(
        drivername=driver_name,
        username=db_user,
        password=db_password,
        database=db_name,
        query=query_string),
    pool_size=5,
    max_overflow=2,
    pool_timeout=30,
    pool_recycle=1800)
But when I tried to change the code like this:
app.config['SQLALCHEMY_DATABASE_URI'] = db
then I got this error: 'Engine' object has no attribute 'drivername'.
Any suggestions for this problem? Any advice would be very helpful. Thank you.

This line right here actually assigns the "engine" object to your URI config:
app.config['SQLALCHEMY_DATABASE_URI'] = db
You can use the url object to create a URI instead:
uri = sqlalchemy.engine.url.URL(
    drivername=driver_name,
    username=db_user,
    password=db_password,
    database=db_name,
    query=query_string)
app.config['SQLALCHEMY_DATABASE_URI'] = uri.render_as_string(hide_password=False)
You can compare the URI to the one you created and see where they differ. It's likely there is a typo, or that some value is escaped incorrectly.
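For comparison, a correctly formatted unix-socket URI built with str.format would look roughly like the sketch below (it assumes db_user, db_password, db_name and connection_name are defined as in your snippet); note the @ before the socket part and that the placeholder names match the keyword arguments passed to format:
# Sketch only: db_user, db_password, db_name and connection_name are assumed
# to be defined elsewhere, as in the question.
SQLALCHEMY_DATABASE_URI = (
    'mysql+pymysql://{user}:{password}@/{database}'
    '?unix_socket=/cloudsql/{connection_name}').format(
        user=db_user, password=db_password,
        database=db_name, connection_name=connection_name)
app.config['SQLALCHEMY_DATABASE_URI'] = SQLALCHEMY_DATABASE_URI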

Related

Trouble authenticating and writing to database locally

I'm having trouble authenticating and writing data to a Spanner database locally. All imports are up to date (google.cloud, google.auth2, etc.). I have had someone else run this and it works fine for them, so the problem seems to be something on my end: something wrong or misconfigured on my computer, maybe where the credentials are stored?
Anyone have any ideas?
from google.cloud import spanner
from google.api_core.exceptions import GoogleAPICallError
from google.api_core.datetime_helpers import DatetimeWithNanoseconds
import datetime
from google.oauth2 import service_account

def write_to(database):
    record = [[
        1041613562310836275,
        'test_name'
    ]]
    columns = ("id", "name")
    insert_errors = []
    try:
        with database.batch() as batch:
            batch.insert_or_update(
                table="guild",
                columns=columns,
                values=record,
            )
    except GoogleAPICallError as e:
        print(f'error: {e}')
        insert_errors.append(e.message)
        pass
    return insert_errors

if __name__ == "__main__":
    credentials = service_account.Credentials.from_service_account_file(r'path\to\a.json')
    instance_id = 'instance-name'
    database_id = 'database-name'
    spanner_client = spanner.Client(project='project-name', credentials=credentials)
    print(f'spanner creds: {spanner_client.credentials}')
    instance = spanner_client.instance(instance_id)
    database = instance.database(database_id)
    insert_errors = write_to(database)
Some credential tests:
creds = service_account.Credentials.from_service_account_file(a_json)
<google.oauth2.service_account.Credentials at 0x...>
spanner_client.credentials
<google.auth.credentials.AnonymousCredentials at 0x...>
spanner_client.credentials.signer_email
AttributeError: 'AnonymousCredentials' object has no attribute 'signer_email'
creds.signer_email
'...#....iam.gserviceaccount.com'
spanner.Client().from_service_account_json(a_json).credentials
<google.auth.credentials.AnonymousCredentials object at 0x...>
The most common reason for this is that you have accidentally set (or forgotten to unset) the environment variable SPANNER_EMULATOR_HOST. If this environment variable is set, the client library will try to connect to the emulator instead of Cloud Spanner. This causes the client library to wait for a long time while trying to connect to the emulator (assuming the emulator is not running on your machine). Unset the environment variable to fix this problem.
Note: This environment variable only affects Cloud Spanner client libraries, which is why other Google Cloud products work on the same machine. The script will also in most cases work on other machines, as they are unlikely to have this environment variable set.
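If you want to guard against this from the script itself, a minimal sketch (standard library only) is to check and clear the variable before creating the client:
import os

# Sketch: if SPANNER_EMULATOR_HOST is set, the Spanner client libraries target
# the emulator, so warn and unset it for this process before building the client.
if os.environ.get('SPANNER_EMULATOR_HOST'):
    print('SPANNER_EMULATOR_HOST is set to', os.environ['SPANNER_EMULATOR_HOST'])
    del os.environ['SPANNER_EMULATOR_HOST']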

Cannot connect to Cloud SQL using Apache-Beam JDBC

I am trying to connect to Cloud SQL using the Python SDK io.jdbc module, more specifically the ReadFromJdbc class, which is documented here: https://beam.apache.org/releases/pydoc/current/apache_beam.io.jdbc.html
Based on it and the info on connecting to Cloud SQL for MySQL using JDBC here - https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory/blob/main/docs/jdbc-mysql.md - I wrote the following code:
import os
import typing

import apache_beam as beam
import apache_beam.io.jdbc as jdbc
import apache_beam.coders as coders
from apache_beam.options.pipeline_options import PipelineOptions

pipeline_options = {
    'project': 'project-name',
    'runner': 'DataflowRunner',
    'region': 'europe-central2',
    'staging_location': "gs://temp",
    'temp_location': "gs://temp",
    'template_location': "gs://templates/temp_name"
}
pipeline_options = PipelineOptions.from_dictionary(pipeline_options)

serviceAccount = r'path\to\serviceaccount.json'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = serviceAccount

ExampleRow = typing.NamedTuple('ExampleRow',
                               [('id', int), ('migration', str)])
coders.registry.register_coder(ExampleRow, coders.RowCoder)

with beam.Pipeline(options=pipeline_options) as p:
    res = (
        p
        | "Read database list" >> jdbc.ReadFromJdbc(
            table_name='table',
            driver_class_name='com.mysql.jdbc.Driver',
            jdbc_url='jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>',
            username='user',
            password='pass',
            query="select id, migration from db.table;",
            fetch_size=1,
            classpath=["com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.7.2"],
            expansion_service='host:6666'
        )
        | "Print results" >> beam.io.WriteToText(r'gs://output/out.csv')
    )
For the expansion service I have set up a WSL2 Python environment as documented here - https://beam.apache.org/documentation/sdks/java-multi-language-pipelines/#advanced-start-an-expansion-service
Unfortunately, I get this error:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:6666: WSA Error"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: ipv4:127.0.0.1:6666: WSA Error {grpc_status:14, created_time:"2022-12-08T15:43:05.445755053+00:00"}"
I tried to switch expansion_service to the specific IP that I got from wsl hostname -I, but it produced the same result, even though the address is reachable (tested with ping and by hosting a web server on it).
Am I doing something completely wrong? I find it hard to believe that it's so hard to connect to Cloud SQL, so I must be...
Transforms under the apache_beam.io.jdbc module are cross-language transforms implemented in the Beam Java SDK. Hence, during pipeline construction, the Python SDK will connect to a Java expansion service to expand these transforms. You followed the instructions for creating a Python expansion service, which cannot expand these Java transforms.
I think the easiest thing to do will be to use the default expansion service.
First, install a Java runtime on the computer from which the pipeline is constructed and make sure that the java command is available.
Use the following transform to read from Cloud SQL:
p | "Read database list" >> jdbc.ReadFromJdbc(
table_name='table',
driver_class_name='com.mysql.jdbc.Driver',
jdbc_url='jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>',
username='user',
password='pass',
query = "select id, migration from db.table;",
fetch_size=1,
classpath=["com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.7.2"]
)

Pytest on a Flask-based API - test by calling the remote API

New to using Pytest on APIs. From my understanding, testing creates another instance of Flask. Additionally, the tutorials I have seen suggest creating a separate DB table instance to add, fetch and remove data for test purposes. However, I simply plan to use the remote API URL as the host and make the calls against it.
Now, I set up my conftest like this, where the flag --testenv indicates which of the hosts listed below the get/post calls should be made against:
import pytest
import subprocess

def pytest_addoption(parser):
    """Add option to pass --testenv=api_server to pytest cli command"""
    parser.addoption(
        "--testenv", action="store", default="exodemo", help="my option: type1 or type2"
    )

@pytest.fixture(scope="module")
def testenv(request):
    return request.config.getoption("--testenv")

@pytest.fixture(scope="module")
def testurl(testenv):
    if testenv == 'api_server':
        return 'http://api_url:5000/'
    else:
        return 'http://localhost:5000'
And my test file is written like this:
import json

import pytest
from app import app
from flask import request

def test_nodes(app):
    t_client = app.test_client()
    truth = [
        {
            *body*
        }
    ]
    res = t_client.get('/topology/nodes')
    print(res)
    assert res.status_code == 200
    assert truth == json.loads(res.get_data())
I run the code using this:
python3 -m pytest --testenv api_server
What I expect is that the test would simply make a call to the remote API with the creds, fetch the data regardless of how it gets pulled by the remote code, and bring it back for assertion. However, I am getting a 400 BAD REQUEST error:
assert 400 == 200
E + where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code
single_test.py:97: AssertionError
--------------------- Captured stdout call ----------------------
{"timestamp": "2022-07-28 22:11:14,032", "level": "ERROR", "func": "connect_to_mysql_db", "line": 23, "message": "Error connecting to the mysql database (2003, \"Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution)\")"}
<WrapperTestResponse streamed [400 BAD REQUEST]>
Does this mean that the test is still trying to look up the database locally? I am also unable to figure out which host the test URL is being sent to, so I am kind of stuck here. Looking to get some help here.
Thanks.
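Note that app.test_client() always runs the locally imported Flask app in-process, which is why it tries to reach the MySQL host from your local config rather than touching the deployed API. If the goal really is to call the remote API over HTTP, a minimal sketch using the requests library and the testurl fixture from the conftest above could look like this (the /topology/nodes path is taken from your test; requests is an assumed extra dependency, not something your code already uses):
import requests

def test_nodes_remote(testurl):
    # Call the deployed API over HTTP instead of the in-process test client.
    res = requests.get(testurl.rstrip('/') + '/topology/nodes')
    print(res)
    assert res.status_code == 200
    # The expected payload would be compared here, e.g.:
    # assert truth == res.json()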

Dialogflow: Agent metadata not found for agentId

I'm trying to use Dialogflow's detect_intent in Python and I keep getting:
404 com.google.apps.framework.request.NotFoundException: Agent metadata not found for agentId: ####-####-####-####-####
Here's a snippet of my code:
import os

import google.cloud.dialogflow as dialogflow
from CONFIG import DIALOGFLOW_PROJECT_ID

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = 'credentials/dialogflow.json'

def predict_intent(text, language):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(DIALOGFLOW_PROJECT_ID, SESSION_ID)
    text_input = dialogflow.TextInput(text=text, language_code=language)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(session=session, query_input=query_input)  # ERROR
    return response.query_result.intent.display_name
I tried running the function multiple times; some calls succeed, but most hit the exception.
I can train the bot using the same interface and it works fine.
I'm using Python 3.7 and the following Google Cloud modules: google-api-core==2.0.1, google-auth==2.0.2, google-cloud-dialogflow==2.7.1, googleapis-common-protos==1.53.0.

Deployment of Django application using MongoDB on AWS

How do I define the settings of the Django application to use the MongoDB server running on the same instance as the Django project? I tried 127.0.0.1 with port 27017 (which I assume is the default port the MongoDB server runs on) in the settings of the Django application. I then tried the IP address of the AWS instance, but with no luck. It always gives me this error:
ConnectionError: You have not defined a default connection
My Django project has the following Mongo settings:
MONGO_SETTINGS = {
    'DB_NAME': 'spotmentor',
    'HOST': '127.0.0.1',
    'PORT': 27017,
    'USERNAME': '',
    'PASSWORD': ''
}
Then I used mongoengine's connect to establish the connection.
I am importing the above MONGO_SETTINGS as mongoset and then:
from mongoengine import connect

connect(mongoset.get('DB_NAME'), host=mongoset.get('HOST'), port=mongoset.get('PORT'),
        username=mongoset.get('USERNAME'), password=mongoset.get('PASSWORD'))
I changed the value of the HOST key to the AWS instance's public IP and still got the same ConnectionError.
I have also defined:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.dummy',
    }
}
How can I resolve this?
mongoengine does not require any extra settings to connect to MongoDB. The settings that you have provided should suffice.
I suggest you re-check your installation of MongoDB.
Try sudo apt-get remove mongodb
and sudo apt-get install mongodb
This should solve your problem.
Also, you need not define a dummy DB backend if you are not using SQL databases.
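As a quick sanity check before reinstalling, you can also confirm whether anything is listening on the MongoDB port at all; a minimal sketch using only the standard library (host and port taken from the settings above):
import socket

# Sketch: verify something accepts connections on 127.0.0.1:27017 before
# changing the Django/mongoengine configuration.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
try:
    sock.connect(('127.0.0.1', 27017))
    print('MongoDB port is reachable')
except OSError as exc:
    print('Cannot reach MongoDB on 127.0.0.1:27017:', exc)
finally:
    sock.close()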