Clojure: lein repl does not start with the database loaded

I used the lein repl command to test some operations on my project's database, but I was not able to connect to it.
Then I found that the issue was that the database was not being loaded. The only workaround I found was:
lein run
This resulted in the following messages:
2018-04-24 12:23:07,397 [main] INFO guestbook.core - #'guestbook.db.core/*db* started
2018-04-24 12:23:07,398 [main] INFO guestbook.core - #'guestbook.handler/init-app started
2018-04-24 12:23:07,398 [main] INFO guestbook.core - #'guestbook.handler/app started
2018-04-24 12:23:07,398 [main] INFO guestbook.core - #'guestbook.core/http-server started
2018-04-24 12:23:07,398 [main] INFO guestbook.core - #'guestbook.core/repl-server started
Then I ran the following command:
lein repl :connect 7000
This connected to the database and started a REPL. The following commands then worked fine:
user=> (use 'guestbook.db.core)
nil
user=> (get-messages)
nil
Please let me know if there is any other way to do this.
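For reference, the "#'guestbook.db.core/*db* started" log line suggests the app manages its state with mount, as Luminus-generated projects do. Assuming that is the case, a plain lein repl (without lein run) can start just the database state by hand; this is an unverified sketch:
lein repl
user=> (require 'mount.core 'guestbook.db.core)
nil
user=> (mount.core/start #'guestbook.db.core/*db*)
{:started ["#'guestbook.db.core/*db*"]} ; illustrative output, assuming mount resolves the var
user=> (use 'guestbook.db.core)
nil
user=> (get-messages)
nil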

Related

EMR Core nodes are not taking up MapReduce jobs

I have a 2-node EMR cluster (version 4.6.0) with HBase installed: 1 master (m4.large) and 1 core (r4.xlarge). I'm using the default EMR configurations. I want to export HBase tables using
hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.mapreduce.include.deleted.rows=true Table_Name hdfs:/full_backup/Table_Name 1
I'm getting the following output:
2022-04-04 11:29:20,626 INFO [main] util.RegionSizeCalculator: Calculating region sizes for table "Table_Name".
2022-04-04 11:29:20,900 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2022-04-04 11:29:20,900 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x17ff27095680070
2022-04-04 11:29:20,903 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x17ff27095680070
2022-04-04 11:29:20,904 INFO [main] zookeeper.ZooKeeper: Session: 0x17ff27095680070 closed
2022-04-04 11:29:20,980 INFO [main] mapreduce.JobSubmitter: number of splits:1
2022-04-04 11:29:20,994 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2022-04-04 11:29:21,192 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1649071534731_0002
2022-04-04 11:29:21,424 INFO [main] impl.YarnClientImpl: Submitted application application_1649071534731_0002
2022-04-04 11:29:21,454 INFO [main] mapreduce.Job: The url to track the job: http://ip-10-0-2-244.eu-west-1.compute.internal:20888/proxy/application_1649071534731_0002/
2022-04-04 11:29:21,455 INFO [main] mapreduce.Job: Running job: job_1649071534731_0002
2022-04-04 11:29:28,541 INFO [main] mapreduce.Job: Job job_1649071534731_0002 running in uber mode : false
2022-04-04 11:29:28,542 INFO [main] mapreduce.Job: map 0% reduce 0%
The job is stuck at this point and never progresses. However, when I add a task node and rerun the same command, it finishes within seconds.
According to the documentation (https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-master-core-task-nodes.html), core nodes should handle tasks as well. What could be going wrong?
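One way to narrow this down (a sketch that assumes shell access to the master node; the node ID is a placeholder) is to ask YARN whether it sees the core node and how much capacity it reports:
# list all NodeManagers known to the ResourceManager
yarn node -list -all
# inspect one node's available memory and vcores (hypothetical node ID)
yarn node -status ip-10-0-2-123.eu-west-1.compute.internal:8041
If the core node reports no available vcores or memory, the single map split has nowhere to run, which would match the job sitting at "map 0%" until a task node is added.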

VS Code: use Jupyter Notebook behind a proxy

I referred to this question, but mine is different:
Get VS Code Python extension to connect to Jupyter running on remote AWS EMR master node
Environment:
The Jupyter Notebook is on AWS EMR.
I access the notebook from the browser using a SOCKS5 proxy. To do so, I have to connect to the work VPN, SSH using PuTTY with a .ppk file, and tunnel (dynamic port forwarding).
To automate the step above, I am using Plink: c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D XXXX user-name@<some_IP_address>
I can successfully enable the proxy by using msedge.exe --proxy-server="socks5://<address>" or chrome.exe --proxy-server="socks5://<address>"
VS Code Version:
C:\Users\ablaze>code --version
1.55.2
3c4e3df9e89829dce27b7b5c24508306b151f30d
x64
Objective:
How can I access the remote Jupyter Notebook hosted on AWS EMR behind a proxy from Visual Studio Code?
I tried and failed:
Visual Studio Code is built on top of Electron and benefits from the networking-stack capabilities of Chromium, so I used Plink as above to SSH, and then executed the following command in my Windows command prompt: C:\Users\ablaze>code --proxy-server="socks5://<address>" --verbose
Error message:
[17668:0505/104410.116:WARNING:dns_config_service_win.cc(692)] Failed to read DnsConfig.
[main 2021-05-05T14:44:10.224Z] Starting VS Code
[main 2021-05-05T14:44:10.224Z] from: c:\Users\ablaze\AppData\Local\Programs\Microsoft VS Code\resources\app
[main 2021-05-05T14:44:10.224Z] args: {
_: [],
...
'no-proxy-server': false,
'proxy-server': 'socks5://localhost:8088',
...
logsPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\logs\\20210505T104410'
}
...
[main 2021-05-05T14:44:10.258Z] windowsManager#open pathsToOpen [
{
backupPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\Backups\\1620224957079',
remoteAuthority: undefined
}
]
To double check I also opened the log file C:\Users\ablaze\AppData\Roaming\Code\logs\20210505T104410\main.log
...
[2021-05-05 10:44:40.345] [main] [trace] update#checkForUpdates, state = idle
[2021-05-05 10:44:40.345] [main] [info] update#setState checking for updates
[2021-05-05 10:44:40.345] [main] [trace] RequestService#request https://update.code.visualstudio.com/api/update/win32-x64-user/stable/3c4e3df9e89829dce27b7b5c24508306b151f30d
[2021-05-05 10:44:40.346] [main] [trace] resolveShellEnv(): skipped (Windows)
[2021-05-05 10:44:44.354] [main] [error] Error: net::ERR_PROXY_CONNECTION_FAILED
at SimpleURLLoaderWrapper.<anonymous> (electron/js2c/browser_init.js:109:6508)
at SimpleURLLoaderWrapper.emit (events.js:315:20)
[2021-05-05 10:44:44.354] [main] [info] update#setState idle
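For what it's worth, a possible alternative (a sketch, not verified in this setup): besides the Chromium --proxy-server switch, VS Code has its own http.proxy user setting that its extension-host requests honour, and as far as I can tell the 1.55-era versions accepted HTTP/HTTPS proxies there but not SOCKS5. If an HTTP proxy can be chained in front of the SSH tunnel, settings.json could point at it; the address and port below are placeholders:
// hypothetical settings.json entries; http.proxy does not accept socks5:// URLs in this version
"http.proxy": "http://127.0.0.1:3128",
"http.proxyStrictSSL": false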

django-q qcluster starts and exits while running in PyCharm Debug

I'm running a Django project with django-q in PyCharm. manage.py runserver is running in one instance, and manage.py qcluster is running in another. qcluster starts up fine, then immediately exits gracefully. Here is the full text:
/Users/user/PycharmProjects/project/venv/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --multiproc --qt-support=auto --client 127.0.0.1 --port 65362 --file /Users/user/PycharmProjects/project/manage.py qcluster --settings=project.settings.dev
Connected to pydev debugger (build 193.6494.30)
pydev debugger: process 21339 is connecting
16:03:44 [Q] INFO Q Cluster grey-kentucky-georgia-avocado starting.
16:03:44 [Q] INFO Process-1 guarding cluster grey-kentucky-georgia-avocado
16:03:44 [Q] INFO Q Cluster grey-kentucky-georgia-avocado running.
16:03:44 [Q] INFO Process-1:1 ready for work at 21343
16:03:44 [Q] INFO Process-1:2 ready for work at 21344
16:03:44 [Q] INFO Process-1:3 ready for work at 21345
16:03:44 [Q] INFO Process-1:4 ready for work at 21346
16:03:44 [Q] INFO Process-1:5 ready for work at 21347
16:03:44 [Q] INFO Process-1:6 monitoring at 21348
16:03:44 [Q] INFO Process-1:7 pushing tasks at 21349
16:03:44 [Q] INFO Q Cluster grey-kentucky-georgia-avocado stopping.
16:03:44 [Q] INFO Process-1 stopping cluster processes
16:03:45 [Q] INFO Process-1:7 stopped pushing tasks
16:03:46 [Q] INFO Process-1:1 stopped doing work
16:03:46 [Q] INFO Process-1:2 stopped doing work
16:03:46 [Q] INFO Process-1:3 stopped doing work
16:03:46 [Q] INFO Process-1:4 stopped doing work
16:03:46 [Q] INFO Process-1:5 stopped doing work
16:03:47 [Q] INFO Process-1 waiting for the monitor.
16:03:47 [Q] INFO Process-1:6 stopped monitoring results
16:03:47 [Q] INFO Q Cluster grey-kentucky-georgia-avocado has stopped.
Process finished with exit code 0
Obviously, I would like it to stay running indefinitely. If I run it from PyCharm's manage.py terminal, it operates as expected, and manage.py runserver also runs as expected.
My versions:
Python 3.7
Django 3.0.3
django-q 1.2.1
PyCharm 2019.3.3 Pro
To run django-q in PyCharm Debug, I had to open Settings -> Build, Execution, Deployment -> Python Debugger and select "Gevent compatible". I found the solution in this issue: https://github.com/Koed00/django-q/issues/367.

I can connect to AWS RDS via SQL Developer but can't from my Java application

It is so weird that I can connect to AWS RDS with SQL Developer but can't with my Java application (Java source code or JSP).
When I try to access RDS, I get errors like:
coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
26-Jun-2018 04:24:33.203 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
26-Jun-2018 04:24:33.212 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
26-Jun-2018 04:24:33.215 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
26-Jun-2018 04:24:33.219 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1387 ms
26-Jun-2018 04:24:33.265 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
26-Jun-2018 04:24:33.266 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.50
26-Jun-2018 04:24:33.286 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
26-Jun-2018 04:24:35.020 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
26-Jun-2018 04:24:35.097 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 1,811 ms
26-Jun-2018 04:24:35.100 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
26-Jun-2018 04:24:35.106 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
26-Jun-2018 04:24:35.108 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1888 ms
Loading driver...
Driver loaded!
jdbc:oracle:thin://IP:1521/ORCL?user=username&password=password
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
But the URL is exactly the same value I used with SQL Developer.
Is there anything wrong?
Please enlighten me; I've been suffering with this for about a week! :(
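Not an authoritative diagnosis, but worth noting: the error shown above (vendor code 17067, "Invalid Oracle URL specified") is usually raised by the thin driver rejecting the URL syntax itself, before any network connection is attempted, which would explain why SQL Developer (which builds the URL for you) works while hand-written JDBC does not. The thin driver expects an @ before the host part, and credentials normally go to DriverManager.getConnection(url, user, password) rather than into ?user=...&password=... query parameters. The usual URL forms, with host, port, and names as placeholders:
jdbc:oracle:thin:@//HOST:1521/SERVICE_NAME
jdbc:oracle:thin:@HOST:1521:SID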
I'm not sure how your application is set up, but I'm using Maven & Spring Boot and I got it working like this:
I mainly followed this guide, ignoring the .sql files, the Thymeleaf UI, the model.addAttribute("cities", cities); part, and the HTML file:
https://zetcode.com/springboot/postgresql/
My application.properties file looks like this:
postgres.comment.aa=https://zetcode.com/springboot/postgresql/
spring.main.banner-mode=off
logging.level.org.springframework=ERROR
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialization-mode=always
spring.datasource.platform=postgres
spring.datasource.url=jdbc:postgresql://your-rds-url-here.us-east-1.rds.amazonaws.com:yourDbPortHere/postgres
spring.datasource.username=postgres
spring.datasource.password=<your db password here>
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
If you have custom schemas, you can append ?currentSchema=users to the URL:
spring.datasource.url=jdbc:postgresql://your-rds-url-here.us-east-1.rds.amazonaws.com:yourDbPortHere/postgres?currentSchema=users
Thanks to this SO answer for the schema part:
Is it possible to specify the schema when connecting to postgres with JDBC?
These other two links also helped:
https://turreta.com/2015/03/01/how-to-specify-a-default-schema-when-connecting-to-postgresql-using-jdbc/
https://doc.cuba-platform.com/manual-latest/db_schema_connection.html
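Since the question itself used plain JDBC (Java source or JSP) rather than Spring Boot, here is a minimal plain-JDBC sketch of the same connection settings; the URL, port, and credentials are placeholders, and it assumes the PostgreSQL driver is on the classpath:
import java.sql.Connection;
import java.sql.DriverManager;

public class RdsConnectTest {
    public static void main(String[] args) throws Exception {
        // hypothetical: same URL style as the spring.datasource.url above
        String url = "jdbc:postgresql://your-rds-url-here.us-east-1.rds.amazonaws.com:5432/postgres?currentSchema=users";
        // credentials are passed as separate arguments, not query parameters
        try (Connection conn = DriverManager.getConnection(url, "postgres", "<your db password here>")) {
            System.out.println("Connected, read-only=" + conn.isReadOnly());
        }
    }
}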

Unable to load cookbooks

I'm following Amazon's tutorial for installing IIS on AWS using Chef.
The problem is that when I execute the custom cookbook, I see the following:
# Logfile created on 2016-03-31 01:32:10 +0000 by logger.rb/41954
[2016-03-31T01:32:10+00:00] INFO: Started chef-zero at http://localhost:8889 with repository at C:/chef
One version per cookbook
data_bags at C:/chef/runs/3011e6d2-9695-4cc2-97d0-b509c34c0b64/data_bags
nodes at C:/chef/runs/3011e6d2-9695-4cc2-97d0-b509c34c0b64/nodes
[2016-03-31T01:32:15+00:00] INFO: *** Chef 12.2.1 ***
[2016-03-31T01:32:15+00:00] INFO: Chef-client pid: 2584
[2016-03-31T01:32:48+00:00] INFO: Setting the run_list to [] from CLI options
[2016-03-31T01:32:48+00:00] INFO: Run List is []
[2016-03-31T01:32:48+00:00] INFO: Run List expands to []
[2016-03-31T01:32:48+00:00] INFO: Starting Chef Run for iis-app-server
[2016-03-31T01:32:48+00:00] INFO: Running start handlers
[2016-03-31T01:32:48+00:00] INFO: Start handlers complete.
[2016-03-31T01:32:48+00:00] INFO: HTTP Request Returned 404 Not Found : Object not found: /reports/nodes/iis-app-server/runs
[2016-03-31T01:32:48+00:00] INFO: Loading cookbooks []
[2016-03-31T01:32:48+00:00] WARN: Node iis-app-server has an empty run list.
[2016-03-31T01:32:49+00:00] INFO: Chef Run complete in 0.218745 seconds
[2016-03-31T01:32:49+00:00] INFO: Running report handlers
[2016-03-31T01:32:49+00:00] INFO: Report handlers complete
It's unable to load the cookbooks for some reason.
The log you've shown is the second Chef run, which is expected to be empty for the update_custom_cookbooks command. If you check the log, the actual cookbook update should appear in the first Chef run, at the top of your log.
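Assuming this is an AWS OpsWorks stack (the chef-zero repository under C:/chef/runs/... above suggests OpsWorks local mode), the cookbook update can also be re-triggered from the AWS CLI so that its own run, and its log, can be inspected separately; the stack and instance IDs are placeholders:
# hypothetical: re-run the custom-cookbook update on an OpsWorks stack
aws opsworks create-deployment \
    --stack-id <stack-id> \
    --instance-ids <instance-id> \
    --command '{"Name":"update_custom_cookbooks"}'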