Database Migration Service for MySQL (GCP): "Error importing data: failed to run mysqldump: import err = , mysqldump error = exit status 2..."

I need some help here.
I'm trying to migrate a large database (500+ GB) using Google's Database Migration Service job.
I think I have all the settings correct, but I keep getting the same error message:
DUMP_STAGE(FAILED): failed to run mysqldump: import err = ,
mysqldump error = exit status 2, stderr: mysqldump: [Warning] Using a
password on the command line interface can be insecure.
Here are the replica instance flags I'm using:
log_bin_trust_function_creators on
event_scheduler on
character_set_server utf8
sql_mode STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
net_read_timeout 7200
net_write_timeout 7200
max_allowed_packet 1073741824
general_log on
read_buffer_size 2147479552
I don't know what to do, and I don't know if I'm missing something. Any ideas?
I can provide more information if needed.
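For anyone hitting the same failure: the password warning is only a warning, and the actual mysqldump error appears to be truncated out of the message. A minimal sketch of reproducing the dump outside DMS to capture the full stderr (SOURCE_HOST, migration_user, and your_database are placeholders; adjust the flags to match your job):
# Run a comparable dump by hand; anything printed after the
# password warning is the real error behind "exit status 2"
mysqldump --host=SOURCE_HOST --user=migration_user -p \
  --single-transaction --routines --triggers --events \
  --set-gtid-purged=ON your_database > /dev/null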

Related

AWS Replication agent problem when launching

I am trying to launch the AWS Replication Agent on CentOS 8.3, and it always returns an error during the replication agent installation (python3 aws-replication-installer-init.py ......)
The output of the process shows me:
The installation of the AWS Replication Agent has started.
Identifying volumes for replication.
Identified volume for replication: /dev/sdb of size 7 GiB
Identified volume for replication: /dev/sda of size 11 GiB
All volumes for replication were successfully identified.
Downloading the AWS Replication Agent onto the source server... Finished.
Installing the AWS Replication Agent onto the source server...
Error: Failed Installing the AWS Replication Agent
Installation failed.
If I check aws_replication_agent_installer.log, I can see messages like:
make -C /lib/modules/4.18.0-348.2.1.el8_5.x86_64/build M=/tmp/tmp8mdbz3st/AgentDriver modules
.....................
retcode: 0
Build essentials returned with code None
--- Building software
running: 'which zypper'
retcode: 256
running: 'make'
retcode: 0
running: 'chmod 0770 ./aws-replication-driver-commander'
retcode: 0
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 0.
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 1.
............
Cannot insert module. Try 9.
Installation returned with code 2
Installation failed due to unspecified error:
stderr: sh: /var/lib/aws-replication-agent/stopAgent.sh: No such file or directory
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no apt-get in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
rmmod: ERROR: Module aws_replication_driver is not currently loaded
insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not available
rmmod: ERROR: Module aws_replication_driver is not currently loaded
Any idea what is causing the error?
The insmod error "Required key not available" means Secure Boot is rejecting the unsigned aws-replication-driver kernel module. Running the command:
mokutil --disable-validation
will allow unsigned kernel modules to load. The command asks you to set a password; on the next boot, the MOK manager prompts you to confirm the change by entering characters from that password.
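In case it helps, a rough sketch of the full sequence (the exact MOK manager prompts vary by distribution):
# Check whether Secure Boot signature enforcement is on
mokutil --sb-state
# Disable validation; mokutil asks you to set a one-time password
sudo mokutil --disable-validation
# On the next boot the MOK manager asks you to confirm the change
# using characters from that password
sudo reboot
# Afterwards, verify the state and rerun the installer
mokutil --sb-state
If you need to keep Secure Boot validation enabled, signing the module with your own MOK key is the alternative.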

Google Cloud Run: Can't connect to PostgreSQL

I am following this tutorial to deploy my existing Django project, which runs locally on SQLite, to Google Cloud Run / Postgres.
I have the cloud_sql_proxy service running and can sign into Postgres from the command line.
I am at the point of running the command
python manage.py migrate
And I get the error:
django.db.utils.OperationalError: connection to server on socket "/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
The answer to that question is yes, the server is running locally and accepting connections, because I can log in with the Postgres client:
agerson@agersons-iMac ~ % psql "sslmode=disable dbname=postgres user=postgres hostaddr=127.0.0.1"
Password for user postgres:
psql (14.1, server 13.4)
Type "help" for help.
postgres=>
I double-checked the connection string in my .env file and it has the correct username/password.
Is this socket not getting created somehow in a previous step?
/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432
It looks like there's a mismatch between what the app is looking for and how you're launching the proxy. The error explains the problem.
You're launching the proxy like this with an incorrect region name (us-central):
cloud_sql_proxy -instances="cgps-registration-2:us-central:cgps-reg-2-postgre-sql=tcp:5432"
But the app is looking for us-central1. Try this (omitting the =tcp:5432 to create a Unix socket):
cloud_sql_proxy -instances="cgps-registration-2:us-central1:cgps-reg-2-postgre-sql"
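For completeness, a sketch of verifying the socket end to end (v1 proxy binary assumed; the postgres user/db below are placeholders). You may also need -dir=/cloudsql so the socket is created where the app looks for it:
# Create the socket directory and start the proxy in Unix-socket mode
sudo mkdir -p /cloudsql && sudo chown $USER /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances="cgps-registration-2:us-central1:cgps-reg-2-postgre-sql" &
# Confirm Postgres answers on the exact socket path the app uses
psql "sslmode=disable user=postgres dbname=postgres host=/cloudsql/cgps-registration-2:us-central1:cgps-reg-2-postgre-sql"
Django only finds the socket if the HOST in its database settings matches this /cloudsql/... path exactly, region string included.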

Sqoop job fails on AWS with Connection error

I am executing the sqoop command below to pull a table from another AWS RDS instance over to HDFS.
#!/bin/bash
sqoop import \
--connect jdbc:mysql://awsrds.cpclxrkdvwmz.us-east-1.rds.amazonaws.com/financials_data \
--username someuser \
--password somepwd \
--table member_score \
--m 1 \
--target-dir /capstone/member_score
I could connect to this server using MySQL Workbench, but sqoop fails to get the data.
The stack trace is shown below:
[ec2-user@ip-10-0-0-238 capstone]$ ./DataIngestion.txt
Warning: /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/01/03 03:56:45 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.15.1
20/01/03 03:56:45 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
20/01/03 03:56:45 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
20/01/03 03:56:45 INFO tool.CodeGenTool: Beginning code generation
20/01/03 03:58:52 ERROR manager.SqlManager: Error executing statement: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
The stack trace says connection error, but I could connect using MySQL Workbench.
Since sqoop was giving a connection error, I tried to ping the server.
[ec2-user@ip-10-0-0-238 capstone]$ ping awsrds.cpclxrkdvwmz.us-east-1.rds.amazonaws.com
PING ec2-3-211-175-82.compute-1.amazonaws.com (3.211.175.82) 56(84) bytes of data.
^C
--- ec2-3-211-175-82.compute-1.amazonaws.com ping statistics ---
2935 packets transmitted, 0 received, 100% packet loss, time 2934021ms
Seeing that the server was not reachable, the next step was to check the Security Group settings on AWS.
The outbound rule should allow All Traffic.
I had previously restricted outbound traffic to a specific IP. Since AWS allocates a new IP each time, pinning a static IP doesn't work.
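Worth noting: RDS endpoints usually don't answer ICMP, so 100% ping loss alone doesn't prove the instance is unreachable. A quicker check of what sqoop actually needs, assuming nc is installed (3306 is the default MySQL port):
# Test TCP reachability of the MySQL port from the cluster node
nc -vz awsrds.cpclxrkdvwmz.us-east-1.rds.amazonaws.com 3306
# "succeeded" means the security groups permit the connection;
# a timeout points back at security group or routing rules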

Cannot connect to redis://localhost:6379/0: Error 111 connecting to localhost:6379. Connection refused

I've deployed a minimal django/celery/redis project to heroku, and I'm trying to test it in the python shell:
heroku run python
>>> import tasks
>>> tasks.add.delay(1, 2)
The problem is that tasks.add.delay(1, 2) doesn't produce any output; it just hangs there, whereas locally it returned an AsyncResult.
Also, when I try to watch the task run in the application logs with "heroku logs -t -p worker",
it gives me this error:
ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 111
connecting to localhost:6379. Connection refused..
P.S. It works fine locally.
Turns out I was missing the following in my tasks.py file in my application directory.
import os
from celery import Celery

app = Celery('tasks')  # the Celery app instance defined in tasks.py
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Added that and now everything works.
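If anyone gets the same hang after this fix, it's worth confirming the add-on actually exported the variable the code reads (your-app is a placeholder):
# Confirm REDIS_URL is set, then restart so the worker picks it up
heroku config:get REDIS_URL -a your-app
heroku restart -a your-app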

Unable to connect to database: Login failed for user 'sa'

I am using the jTDS JDBC driver (net.sourceforge.jtds.jdbc.Driver) to connect JasperReports to SQL Server 2000 SP2, but I always get the error "Unable to connect to database: Login failed for user 'sa'". I am using CentOS 7.
I have tried replacing several drivers with different versions but I always get the same error message.
This is my command:
java -jar /var/www/clients/client1/web73/web/vendor/cossou/jasperphp/src/JasperStarter/lib/jasperstarter.jar pr \
  /var/www/clients/client1/web73/web/storage/app/uploads/presensi/report/report1.jrxml \
  -f pdf -t generic -H 192.168.22.2 -u sa \
  --db-driver net.sourceforge.jtds.jdbc.Driver \
  --db-url "jdbc:jtds:sqlserver://192.168.22.2:1433;instance=MYINSTANCE;databaseName=MYDB"
Unable to connect to database: Login failed for user 'sa'
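One thing that stands out: the command passes -u sa but no password, so the login is probably attempted with an empty one. A sketch with the password supplied, assuming JasperStarter's -p option (check jasperstarter pr --help on your version for the exact flag):
# Same invocation with the sa password (placeholder value)
java -jar /var/www/clients/client1/web73/web/vendor/cossou/jasperphp/src/JasperStarter/lib/jasperstarter.jar pr \
  /var/www/clients/client1/web73/web/storage/app/uploads/presensi/report/report1.jrxml \
  -f pdf -t generic -H 192.168.22.2 -u sa -p 'SA_PASSWORD' \
  --db-driver net.sourceforge.jtds.jdbc.Driver \
  --db-url "jdbc:jtds:sqlserver://192.168.22.2:1433;instance=MYINSTANCE;databaseName=MYDB"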