I have dockerized my existing Django REST project, which uses a MySQL database.
My Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    depends_on:
      - db
    ports:
      - "8000:8000"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: libraries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
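For reference, the web container reaches MySQL through the service name db as the host. A quick way I can confirm the database was created (a sketch, using the credentials from the compose file above):
# List the databases inside the running db container; "libraries" should appear
docker-compose exec db mysql -uroot -proot -e "SHOW DATABASES;"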
My commands docker-compose build and docker-compose up are successful, and the output of the latter is:
D:\Development\personal_projects\library_backend>docker-compose up
Starting librarybackend_db_1 ... done
Starting librarybackend_web_1 ... done
Attaching to librarybackend_db_1, librarybackend_web_1
db_1 | 2018-02-13T10:11:48.044358Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
db_1 | 2018-02-13T10:11:48.045250Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 1 ...
db_1 | 2018-02-13T10:11:48.047697Z 0 [Note] InnoDB: PUNCH HOLE support available
db_1 | 2018-02-13T10:11:48.047857Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
db_1 | 2018-02-13T10:11:48.048076Z 0 [Note] InnoDB: Uses event mutexes
db_1 | 2018-02-13T10:11:48.048193Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
db_1 | 2018-02-13T10:11:48.048297Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
db_1 | 2018-02-13T10:11:48.048639Z 0 [Note] InnoDB: Using Linux native AIO
db_1 | 2018-02-13T10:11:48.048928Z 0 [Note] InnoDB: Number of pools: 1
db_1 | 2018-02-13T10:11:48.049119Z 0 [Note] InnoDB: Using CPU crc32 instructions
db_1 | 2018-02-13T10:11:48.050256Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
db_1 | 2018-02-13T10:11:48.056054Z 0 [Note] InnoDB: Completed initialization of buffer pool
db_1 | 2018-02-13T10:11:48.058064Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
db_1 | 2018-02-13T10:11:48.069243Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
db_1 | 2018-02-13T10:11:48.081867Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
db_1 | 2018-02-13T10:11:48.082237Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
db_1 | 2018-02-13T10:11:48.096687Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
db_1 | 2018-02-13T10:11:48.097392Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
db_1 | 2018-02-13T10:11:48.097433Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
db_1 | 2018-02-13T10:11:48.097666Z 0 [Note] InnoDB: Waiting for purge to start
db_1 | 2018-02-13T10:11:48.147792Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 13453508
db_1 | 2018-02-13T10:11:48.148222Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db_1 | 2018-02-13T10:11:48.148657Z 0 [Note] Plugin 'FEDERATED' is disabled.
db_1 | 2018-02-13T10:11:48.151181Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180213 10:11:48
db_1 | 2018-02-13T10:11:48.152154Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
db_1 | 2018-02-13T10:11:48.152545Z 0 [Warning] CA certificate ca.pem is self signed.
db_1 | 2018-02-13T10:11:48.153982Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
db_1 | 2018-02-13T10:11:48.154147Z 0 [Note] IPv6 is available.
db_1 | 2018-02-13T10:11:48.154261Z 0 [Note] - '::' resolves to '::';
db_1 | 2018-02-13T10:11:48.154373Z 0 [Note] Server socket created on IP: '::'.
db_1 | 2018-02-13T10:11:48.160505Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.160745Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.160859Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161025Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161147Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161266Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.168523Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.168734Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.172735Z 0 [Note] Event Scheduler: Loaded 0 events
db_1 | 2018-02-13T10:11:48.173195Z 0 [Note] mysqld: ready for connections.
db_1 | Version: '5.7.20' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
db_1 | 2018-02-13T10:11:48.173365Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
db_1 | 2018-02-13T10:11:48.173467Z 0 [Note] Beginning of list of non-natively partitioned tables
db_1 | 2018-02-13T10:11:48.180866Z 0 [Note] End of list of non-natively partitioned tables
web_1 | Operations to perform:
web_1 | Apply all migrations: account, admin, auth, authtoken, contenttypes, libraries, sessions, sites
web_1 | Running migrations:
web_1 | No migrations to apply.
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | February 13, 2018 - 10:11:50
web_1 | Django version 1.10.3, using settings 'config.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
I can now access my app by hitting localhost:8000. However, since this creates a fresh database instance in the container, I do not know how I can create a superuser there and log in to my admin interface. Normally, without Docker, I run python manage.py createsuperuser, which starts an interactive prompt to enter the admin user's credentials.
How should I handle this?
If I have an existing database with data in it, how can I use it to populate the tables of the database inside the container?
I could create a superuser by simply running docker-compose run web python manage.py createsuperuser, which opened an interactive prompt to enter the admin credentials; afterwards I could log in through my admin interface.
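For the second part, populating the container database from an existing dump, one option (a sketch; the dump.sql file name is hypothetical, while the credentials and database name come from the compose file above) is to pipe the dump into the db service:
# Load an existing SQL dump into the "libraries" database of the db container
# (-T disables pseudo-TTY allocation so the stdin redirection works)
docker-compose exec -T db mysql -uroot -proot libraries < dump.sql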
Related
I use the recommended Redash Docker image; docker-compose tested locally, and all services are up and running (server, scheduler, worker, redis, postgres). http://localhost:5000/setup is up and running.
% docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`
Creating network "redash_default" with the default driver
Creating redash_postgres_1 ... done
Creating redash_redis_1 ... done
Creating redash_server_1 ... done
Creating redash_scheduler_1 ... done
Creating redash_adhoc_worker_1 ... done
Creating redash_scheduled_worker_1 ... done
Creating redash_nginx_1 ... done
% docker-compose run --rm server create_db
Creating redash_server_run ... done
[2021-10-29 23:53:52,904][PID:1][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2021-10-29 23:53:52,905][PID:1][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2021-10-29 23:53:52,933][PID:1][INFO][alembic.runtime.migration] Running stamp_revision -> 89bc7873a3e0
I built the image from this version and pushed it to ECR, then configured a Fargate task definition to run from this image. In the task definition I mapped ports 6379, 5432, 5000, and 80 (any possible ports). The task shows worker timeouts.
2021-10-29 18:31:08[2021-10-29 23:31:08,742][PID:6104][DEBUG][redash.query_runner] Registering Vertica (vertica) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08,744][PID:6104][DEBUG][redash.query_runner] Registering ClickHouse (clickhouse) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6104)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6105)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6106] [INFO] Worker exiting (pid: 6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6105] [INFO] Worker exiting (pid: 6105)
I manually added two environment variables, REDASH_REDIS_URL and REDASH_DATABASE_URL. They do not help either. My EC2 instance can access RDS, so it's not a database problem.
ubuntu@ip-10-8-0-191:~$ psql -h p-d-mt-a-redash-rds-instance-1.c5clgmj5xaif.us-west-2.rds.amazonaws.com -p 5432 -U postgres
Password for user postgres:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.4)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
mytable | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +
| | | | | rdsadmin=CTc/rdsadmin
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
postgres=>
Does anyone know how to get the workers working (not timing out and exiting)? Does it take a lot of configuration to get Redash running on Fargate? BTW, I created the Redis and RDS instances in AWS; they are all in the same VPC, with the security group inbound rules configured too.
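For reference, both variables Redash reads are plain connection URLs; the values below are placeholders rather than anything from my setup:
# Hypothetical endpoints; substitute the real ElastiCache and RDS hostnames and credentials
export REDASH_REDIS_URL="redis://my-redis-endpoint.cache.amazonaws.com:6379/0"
export REDASH_DATABASE_URL="postgresql://postgres:password@my-rds-endpoint.rds.amazonaws.com:5432/postgres"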
I'm trying to set up PXC with 3 nodes. The first node was bootstrapped successfully, but when trying to start the second node it can't complete SST.
Log file from the 2nd node:
2020-10-07T13:17:16.480905Z 0 [Note] [MY-000000] [WSREP] Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address 'xxx.xxx.xxx.xxx:4444' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '154359' --mysqld-version '8.0.20-11.1' --binlog 'mysql-bin' )
2020-10-07T13:17:17.203042Z 0 [Warning] [MY-000000] [WSREP-SST] Found a stale sst_in_progress file: /var/lib/mysql//sst_in_progress
2020-10-07T13:17:17.642030Z 1 [Note] [MY-000000] [WSREP] Prepared SST request: xtrabackup-v2|xxx.xxx.xxx.xxx:4444/xtrabackup_sst//1
2020-10-07T13:17:17.642609Z 1 [Note] [MY-000000] [Galera] Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 10), state transfer needed: yes
2020-10-07T13:17:17.643130Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2020-10-07T13:17:17.643718Z 1 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: 5
2020-10-07T13:17:17.644031Z 1 [Note] [MY-000000] [Galera] Check if state gap can be serviced using IST
2020-10-07T13:17:17.644344Z 1 [Note] [MY-000000] [Galera] Local UUID: 00000000-0000-0000-0000-000000000000 != Group UUID: 613f9455-07f0-11eb-9e01-139f2b6b4973
2020-10-07T13:17:17.644667Z 1 [Note] [MY-000000] [Galera] ####### IST uuid:00000000-0000-0000-0000-000000000000 f: 0, l: 57, STRv: 3
2020-10-07T13:17:17.645190Z 1 [Note] [MY-000000] [Galera] IST receiver addr using ssl://xxx.xxx.xxx.xxx:4568
2020-10-07T13:17:17.645589Z 1 [Note] [MY-000000] [Galera] IST receiver using ssl
2020-10-07T13:17:17.646458Z 1 [Note] [MY-000000] [Galera] Prepared IST receiver for 0-57, listening at: ssl://xxx.xxx.xxx.xxx:4568
2020-10-07T13:17:17.647629Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:17:17.648009Z 1 [Note] [MY-000000] [Galera] Requesting state transfer failed: -11(Resource temporarily unavailable). Will keep retrying every 1 second(s)
2020-10-07T13:17:18.651866Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:17:18.969089Z 0 [Note] [MY-000000] [Galera] (6b19a1b7, 'ssl://0.0.0.0:4567') turning message relay requesting off
2020-10-07T13:17:19.654067Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:18:57.356706Z 0 [Note] [MY-000000] [WSREP-SST] pigz: skipping: <stdin> empty
2020-10-07T13:18:57.360327Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************* FATAL ERROR **********************
2020-10-07T13:18:57.363359Z 0 [ERROR] [MY-000000] [WSREP-SST] Possible timeout in receving first data from donor in gtid/keyring stage
2020-10-07T13:18:57.363393Z 0 [ERROR] [MY-000000] [WSREP-SST] Line 1108
2020-10-07T13:18:57.363412Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************************************************
2020-10-07T13:18:57.363430Z 0 [ERROR] [MY-000000] [WSREP-SST] Cleanup after exit with status:32
2020-10-07T13:18:57.384013Z 0 [ERROR] [MY-000000] [WSREP] Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address 'xxx.xxx.xxx.xxx:4444' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '154359' --mysqld-version '8.0.20-11.1' --binlog 'mysql-bin' : 32 (Broken pipe)
2020-10-07T13:18:57.384488Z 0 [ERROR] [MY-000000] [WSREP] Failed to read uuid:seqno from joiner script.
2020-10-07T13:18:57.384541Z 0 [ERROR] [MY-000000] [WSREP] SST script aborted with error 32 (Broken pipe)
2020-10-07T13:18:57.385257Z 3 [Note] [MY-000000] [Galera] Processing SST received
2020-10-07T13:18:57.385338Z 3 [Note] [MY-000000] [Galera] SST request was cancelled
2020-10-07T13:18:57.385387Z 3 [ERROR] [MY-000000] [Galera] State transfer request failed unrecoverably: 32 (Broken pipe). Most likely it is due to inability to communicate with the cluster primary component. Restart required.
2020-10-07T13:18:57.385421Z 3 [Note] [MY-000000] [Galera] ReplicatorSMM::abort()
2020-10-07T13:18:57.385453Z 3 [Note] [MY-000000] [Galera] Closing send monitor...
2020-10-07T13:18:57.385484Z 3 [Note] [MY-000000] [Galera] Closed send monitor.
2020-10-07T13:18:57.385519Z 3 [Note] [MY-000000] [Galera] gcomm: terminating thread
2020-10-07T13:18:57.385733Z 3 [Note] [MY-000000] [Galera] gcomm: joining thread
2020-10-07T13:18:57.385762Z 3 [Note] [MY-000000] [Galera] gcomm: closing backend
2020-10-07T13:18:57.945476Z 1 [ERROR] [MY-000000] [Galera] Requesting state transfer failed: -77(File descriptor in bad state)
2020-10-07T13:18:57.945563Z 1 [ERROR] [MY-000000] [Galera] State transfer request failed unrecoverably: 77 (File descriptor in bad state). Most likely it is due to inability to communicate with the cluster primary component. Restart required.
A couple of things could be causing this. 1) Make sure ports 4444, 4567, and 4568 are open between all nodes. 2) Make sure you have copied the SSL certificates from node 1 over to node 2 before starting node 2. Please read the PXC documentation on setting up new nodes.
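To quickly verify point 1 from the joiner's side, a check along these lines could help (the donor address is a placeholder, masked like the log above):
# Confirm the SST, group-communication and IST ports are reachable on the donor node
donor=xxx.xxx.xxx.xxx
for port in 4444 4567 4568; do
    nc -zvw3 "$donor" "$port"
done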
I am trying to install PCF Dev on a local machine running Windows 10, following the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I am getting the error below in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
            L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs of the failed jobs routing-api, cloud_controller_clock, and credhub?
You need to install the bosh CLI first: https://bosh.io/docs/cli-v2-install/
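On macOS, one common route (assuming Homebrew is available) is:
brew install cloudfoundry/tap/bosh-cli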
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment; in this case, cf-66ade9481d314315358c is the name:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.
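For example (the archive name matches the download path above; each instance, including control with its routing-api, cloud_controller_clock and credhub jobs, should have its own nested archive inside):
# Extract the downloaded bundle of logs
tar -xzf cf-66ade9481d314315358c-20200509-195443-771607.tgz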
I've configured a Cloud SQL second generation instance and an App Engine application (Python 2.7) in one project. I've made the necessary settings according to that page.
app.yaml
runtime: python27
api_version: 1
threadsafe: true

env_variables:
  CLOUDSQL_CONNECTION_NAME: coral-heuristic-215610:us-central1:db-basic-1
  CLOUDSQL_USER: root
  CLOUDSQL_PASSWORD: xxxxxxxxx

beta_settings:
  cloud_sql_instances: coral-heuristic-215610:us-central1:db-basic-1

libraries:
- name: lxml
  version: latest
- name: MySQLdb
  version: latest

handlers:
- url: /main
  script: main.app
Now, when I try to connect from the app (inside Cloud Shell), I get the error:
OperationalError: (2002, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')
Direct connection works:
$ gcloud sql connect db-basic-1 --user=root
was successful...
MySQL [correction_dict]> SHOW PROCESSLIST;
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| 9 | root | localhost | NULL | Sleep | 4 | | NULL |
| 10 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112306 | root | 35.204.173.246:59210 | correction_dict | Query | 0 | starting | SHOW PROCESSLIST |
| 112357 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112368 | root | localhost | NULL | Sleep | 0 | | NULL |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
I've authorized the IP to connect to the Cloud SQL instance.
Any hints, help?
Google App Engine Standard provides a unix socket at /cloudsql/[INSTANCE_CONNECTION_NAME] that automatically connects you to your Cloud SQL instance. All you need to do is connect to it at that address. For the MySQLdb library, that looks like this:
import os
import MySQLdb

# Values come from the env_variables section of app.yaml above
cloudsql_unix_socket = '/cloudsql/' + os.environ['CLOUDSQL_CONNECTION_NAME']
db = MySQLdb.connect(
    unix_socket=cloudsql_unix_socket,
    user=os.environ['CLOUDSQL_USER'],
    passwd=os.environ['CLOUDSQL_PASSWORD'])
(If you are running AppEngine Flexible, connecting is different and can be found here)
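If you are unsure what connection name belongs in the socket path, it can be looked up with gcloud (the instance name below is the one from the question):
# Prints the INSTANCE_CONNECTION_NAME used in the /cloudsql/... socket path
gcloud sql instances describe db-basic-1 --format='value(connectionName)'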
I've been struggling with deploying an app on Dokku since yesterday. I've been able to deploy two others on the same PaaS platform but for some reason, this one seems to be giving issues.
Right now, I can't even make sense of these logs.
11:30:52 rake.1 | started with pid 12
11:30:52 console.1 | started with pid 14
11:30:52 web.1 | started with pid 16
11:30:52 worker.1 | started with pid 18
11:31:30 worker.1 | [Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:30 worker.1 | 2015-09-21T11:31:30+0000:[Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:31 worker.1 | Delayed::Backend::ActiveRecord::Job Load (9.8ms) UPDATE "delayed_jobs" SET locked_at = '2015-09-21 11:31:31.090080', locked_by = 'host:134474ed9b8c pid:18' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2015-09-21 11:31:30.694648' AND (locked_at IS NULL OR locked_at < '2015-09-21 07:31:30.694715') OR locked_by = 'host:134474ed9b8c pid:18') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
11:31:32 console.1 | Loading production environment (Rails 4.2.0)
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick 1.3.1
11:31:33 web.1 | [2015-09-21 11:31:33] INFO ruby 2.0.0 (2015-04-13) [x86_64-linux]
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick::HTTPServer#start: pid=20 port=5200
11:31:33 rake.1 | Abort testing: Your Rails environment is running in production mode!
11:31:33 console.1 | Switch to inspect mode.
11:31:33 console.1 |
11:31:33 console.1 | exited with code 0
11:31:33 system | sending SIGTERM to all processes
11:31:33 worker.1 | [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 worker.1 | 2015-09-21T11:31:33+0000: [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 rake.1 | exited with code 1
11:31:33 web.1 | terminated by SIGTERM
11:31:36 worker.1 | SQL (1.6ms) UPDATE "delayed_jobs" SET "locked_by" = NULL, "locked_at" = NULL WHERE "delayed_jobs"."locked_by" = $1 [["locked_by", "host:134474ed9b8c pid:18"]]
11:31:36 worker.1 | exited with code 0
I would really appreciate it if anyone could help me catch what I'm doing wrong. Thanks.
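One detail that stands out in the trace: console.1 and rake.1 exit almost immediately, and the "sending SIGTERM to all processes" line then takes the web and worker processes down with them. A possible first step (a sketch; the app name is a placeholder) is to stop scheduling those one-shot process types:
# Keep only the long-running process types in the formation
dokku ps:scale my-app web=1 worker=1 rake=0 console=0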