How can I programmatically dump or query the Launch Services database on macOS (i.e. an analog of the command lsregister -dump)?
EDIT: I want to get the set of associations UTI -> Bundle IDs. Using LSCopyAllRoleHandlersForContentType does not always work (here is a similar problem), so I concluded that the most reliable method is parsing the output of "lsregister -dump", but the location of lsregister changes from version to version.
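A rough sketch of the fallback I have in mind, in C++: probe the known historical locations of lsregister (the paths below are only examples and vary between macOS releases) and pipe the -dump output back into the process for parsing. The actual UTI -> bundle-id parsing is left as a stub, since the dump format also differs between versions.

#include <cstdio>
#include <string>
#include <vector>
#include <sys/stat.h>

// Return the first existing lsregister path from a list of known locations.
static std::string findLsregister() {
    const std::vector<std::string> candidates = {
        "/System/Library/Frameworks/CoreServices.framework/Frameworks/"
        "LaunchServices.framework/Support/lsregister",
        "/System/Library/Frameworks/CoreServices.framework/Versions/A/"
        "Frameworks/LaunchServices.framework/Versions/A/Support/lsregister"
    };
    struct stat st;
    for (const auto &path : candidates)
        if (stat(path.c_str(), &st) == 0)
            return path;
    return "";
}

int main() {
    const std::string tool = findLsregister();
    if (tool.empty()) return 1;                       // lsregister not found

    FILE *pipe = popen((tool + " -dump").c_str(), "r");
    if (!pipe) return 1;
    char line[4096];
    while (fgets(line, sizeof line, pipe)) {
        // TODO: extract UTI / bundle-id bindings here; the exact field
        // names in the dump depend on the macOS version.
        fputs(line, stdout);
    }
    pclose(pipe);
    return 0;
}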
I am setting up a development server on an AWS AMI with ColdFusion 2018 and MariaDB 10.5.4.
I could not find out what the current production versions are, but it is quite possible they are somewhat older. The application was launched in 2016.
The code is unchanged from production, and the database is a direct backup and restore, no changes.
I am getting errors in the cfoutput of a query when it tries to format a field named DateStamp. This is one example of the code that errors; it appears in many places:
#DateFormat(Q.DateStamp,"m/d/yyyy")# #TimeFormat(Q.DateStamp, "short")#
This is the error
"The value class java.time.LocalDateTime cannot be converted to a date"
The DateStamp column in the MariaDB table has the datatype datetime, and this is unchanged from production.
I don't know why this is expecting the field to be a LocalDateTime when it is a regular DATETIME. It has to be something in the configuration of this environment, but I'm having trouble understanding what. I have searched, but all I get are "how to handle LocalDateTime" type links, which don't help: I can't change all the code when this is a test environment that must at least start with the same code as production.
As per my comment on Adrian's answer: in cases like this the answer is often found by comparing the datasource configuration between environments, most importantly whether the same database driver was chosen in both, then the various advanced settings on the datasource, and finally any version/compatibility settings on the database server itself.
You need to find out which versions of ACF, MySQL and Java are running in production. Even if the application was launched in 2016, there isn’t any guarantee it was released on ColdFusion 2016. It could be an older version of the server.
Selecting Q.DateStamp for a single id returns a value that looks like "2016-10-17 17:50:34".
Did you run the query in an IDE or did you run this through a cfquery? You need to make sure that it's returning as a DateTime object ({ts '2012-12-12 12:12:12'}) and not a string.
java.time.LocalDateTime
A date-time without a time-zone in the ISO-8601 calendar system, such as 2007-12-03T10:15:30.
MariaDB DateTime
MariaDB displays DATETIME values in 'YYYY-MM-DD HH:MM:SS.ffffff'
I'm using Siddhi to create an app which also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug with making multiple updates on the same PostgreSQL table within a single event (i.e. upon receiving an event, update a record in the table, and then create another record in the same table); it seems the batch updates are causing some problems. So I just want to try disabling batchUpdate (it is enabled by default), but I don't know how to configure this using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd appreciate a quick answer on how to set these fields.
Best regards...
When batchEnable is set to true, the extension performs insert/update operations on a batch of events instead of performing them on each and every single event. Simply put, this was introduced to improve performance.
The default value of this parameter is currently "true".
However, the batchEnable configuration is done through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be configured in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
But currently there is no way to override those system configurations from the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
1. Find the siddhi-store-rdbms-4.x.xx.jar file in the Siddhi SDK. It is located in {siddhi-sdk-home}/lib/.
2. Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
3. Change <batchEnable>true</batchEnable> to false under the <database name="PostgreSQL"> tag and save it.
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config.
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false
On my local machine, RStudio + Shiny work properly.
Now I have Shiny Server installed on Linux, but I do not know how to get the data generated in RStudio onto it.
How can I get Shiny Server to read it? I don't even know what keywords to search for.
Thanks
Importing data on the server
As I see it, there are two ways to supply data in this situation.
The first one is to upload the data to the server where your shiny-apps are hosted. This can be done via ssh (wget) or something like FileZilla. You can put your data in the same folder as the app and then access them with relative paths. For example if you have
- app-folder
  - app.R
  - data.rds
  - more_data.csv
You can use readRDS("data.rds") or readr::read_csv2("more_data.csv") in app.R to use the data in the app.
The second option is to use fileInput inside your app. This will give you the option to upload data from your local machine in the GUI. This data will then be put onto the server temporarily. See ?shiny::fileInput.
Exporting data from RStudio
There are numerous ways to do this. You can use save to write your whole workspace to disk. If you just want to save single objects, saveRDS is quite handy. If you want to save datasets (for example data.frames) you can also use readr::write_csv or similar functions.
I am working with the libmysqld C library in a C++ application on Windows in order to interface with an embedded MySQL server, i.e. a MySQL server that is online for the lifetime of the process that embeds it. The application that creates the database uses a MySQL .ini file to create the datadir relative to the application directory instead of in the global MySQL install folder, e.g.
[libmysqld_server]
basedir=./
datadir=./Database
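For context, the embedded server is brought up roughly along these lines (a simplified sketch, not the application's actual code; the schema name and the way options are passed are assumptions):

#include <mysql.h>
#include <cstdio>

int main() {
    // Same application-relative basedir/datadir as in the .ini above.
    static char *server_args[] = {
        (char *)"app_placeholder",      // argv[0]-style placeholder, ignored
        (char *)"--basedir=./",
        (char *)"--datadir=./Database"
    };
    static char *server_groups[] = { (char *)"libmysqld_server", NULL };

    if (mysql_library_init(3, server_args, server_groups) != 0) {
        fprintf(stderr, "could not start embedded server\n");
        return 1;
    }

    MYSQL *mysql = mysql_init(NULL);
    mysql_options(mysql, MYSQL_OPT_USE_EMBEDDED_CONNECTION, NULL);
    if (!mysql_real_connect(mysql, NULL, NULL, NULL,
                            "mydb" /* example schema name */, 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(mysql));
        return 1;
    }

    /* ... CREATE TABLE / CREATE TRIGGER statements go here ... */

    mysql_close(mysql);
    mysql_library_end();
    return 0;
}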
I can programmatically create a trigger with no problem, e.g.
status = mysql_query(mysql,
    "CREATE TRIGGER del_trigger AFTER DELETE ON table FOR EACH ROW "
    "INSERT INTO otherTable (col1, col2) VALUES (OLD.col1, OLD.col2)");
if (status == 0) {
Log(DEBUG, "Initialize(): <%p> Delete Trigger creation passed ...", this);
}
else {
Log(DEBUG, "Initialize(): <%p> Delete Trigger creation failed with error %s...", this, mysql_error(mysql));
}
The problem I run into however is that when the trigger gets called, mysql will complain that the mysql.proc table does not exist because I do not have a mysql database inside my application specific datadir. I have tried copying the mysql folder from the installation directory in C:\Program Files\MySQL... but then I run into issues where mysql reports
Error:Cannot load from mysql.proc. The table is probably corrupted
The only advice I have seen related to the above error is to run the 'mysql_upgrade' command, which does not seem to work for the case of an embedded database using its own datadir. I'm at the point where all of the tables are created and their respective triggers are set up, but I just can't get around this mysql.proc error.
UPDATE:
I am also seeing some inconsistent behavior here. My version of MySQL is "mysql-5.5.16-win32" and it comes with a mysql_embedded.exe binary that I can use to open up a console and point to the database files generated by my application when it isn't running. When I perform operations in the mysql_embedded.exe, the triggers work without issue (no 'mysql.proc is probably corrupted' errors). So it seems like only the libmysqld c api is having an issue with the mysql system tables.
The solution was as simple as verifying that the "mysql" system database was the same version as the MySQL version embedded in libmysqld. I verified my client version info via the following:
const char * version = mysql_get_client_info();
This returned "5.1.44" instead of the "5.5.16" that I was expecting. Downloading the mysql ZIP archive for 5.1.44 and using the mysql database in the datadir fixed the issue that I was experiencing.
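For anyone hitting the same thing, a tiny check along these lines makes the mismatch obvious (the expected version string below is just an example; it should be whichever distribution the datadir's mysql database was taken from):

#include <mysql.h>
#include <cstdio>
#include <cstring>

int main() {
    const char *expected = "5.5.16";                 // version the datadir was prepared for (example)
    const char *linked   = mysql_get_client_info();  // version actually embedded in libmysqld
    printf("libmysqld reports %s, datadir prepared for %s\n", linked, expected);
    if (strncmp(linked, expected, 3) != 0)
        printf("warning: versions differ; mysql.proc and other system tables "
               "may look corrupted to the embedded server\n");
    return 0;
}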
The problem:
My C++ application connects to a MySQL server, reads the first/header line of each db export .txt file, builds a CREATE TABLE statement to prepare for the import, and executes it against the database (no problem with that; the table appears just as intended). But when I try to execute the LOAD DATA LOCAL INFILE to import the data into the newly created table, I get the error "The used command is not allowed with this MySQL version". Yet this works on the CLI: when I execute the command using mysql -u <user> -p<password> -e "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY '|' LINES TERMINATED BY '\r\n';" it works flawlessly.
The Situation:
My company gets a large quantity of database exports (160 files / 10 GB of .txt files that are '|' delimited) from our vendors on a monthly basis that have to replace the old vendor lists. I am working on a smallish C++ app to deal with this on my work desktop. The application is meant to set up the required tables, import the data, then execute a series of intermediate queries against multiple tables to assemble information in a series of final tables, which is then itself exported and uploaded to the production environment for use on the company's e-commerce website.
My Setup:
Ubuntu 12.04
MySQL Server v. 5.5.29 + MySQL Command Line client
Linux GNU C++ Compiler
libmysqlcppconn is installed and I have the required mysqlconn library linked in.
I have already overcome/tried the following issues/combinations:
1.) I have already discovered (the hard way) that LOAD DATA [LOCAL] INFILE statements must be enabled in the config -- I have the "local-infile" option set in the configuration files for both client and server (fixed by updating /etc/mysql/my.cnf with "local-infile" entries for the client and server. NOTE: I could have restarted mysql-server with --local-infile=1, but this is my local dev environment so I just wanted it turned on permanently)
2.) LOAD DATA LOCAL INFILE seems to fail to perform the import (from the CLI) if the target import file does not have execute permissions enabled (fixed with chmod +x target_file.txt)
3.) I am using the mysql root account in my application code (because its my localhost, not production and this particular program will never run on a production server.)
4.) I have tried executing my compiled binary program using the sudo command (no change, same error "The used command is not allowed with this MySQL version")
5.) I have tried changing the ownership of the binary file from my normal login to root (no change, same error "The used command is not allowed with this MySQL version")
6.) I know the libmysqlcppconn connector is working because I am able to connect and perform the CREATE TABLE call without a problem, and I can do other queries and execute statements
What am I missing? Any suggestions? Thanks in advance :)
After much diligent trial and error working with the /etc/mysql/my.cnf file (I know this is a permissions issue because it works on the command line, but not from the connector), and after much googling and finding some back-alley tech support posts, I've come to the conclusion that the MySQL C++ connector simply does not (for whatever reason) give developers a way to enable the local-infile=1 option from the C++ connector.
Apparently some people have been able to hack/fork the MySQL C++ connector to expose the functionality, but no one posted their source code -- they only said it worked. Apparently there is a workaround in the MySQL C API: after you initialize the connection you would use this:
mysql_options( &mysql, MYSQL_OPT_LOCAL_INFILE, 1 );
which apparently allows the LOAD DATA LOCAL INFILE statements to work with the MySQL C API.
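For completeness, a minimal sketch of that C-API route (this is the plain MySQL C API, not Connector/C++; the host, credentials and table names are just examples taken from the CLI command above):

#include <mysql.h>
#include <cstdio>

int main() {
    MYSQL *conn = mysql_init(NULL);

    // MYSQL_OPT_LOCAL_INFILE takes a pointer to an unsigned int; non-zero enables it.
    unsigned int enable_local_infile = 1;
    mysql_options(conn, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile);

    if (!mysql_real_connect(conn, "localhost", "root", "password",
                            "mydb", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    // Same statement that already works from the CLI.
    const char *load_stmt =
        "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable "
        "FIELDS TERMINATED BY '|' LINES TERMINATED BY '\\r\\n'";
    if (mysql_query(conn, load_stmt) != 0)
        fprintf(stderr, "load failed: %s\n", mysql_error(conn));

    mysql_close(conn);
    return 0;
}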
Here are some reference articles that led me to this conclusion:
1.) How can I get the native C API connection structure from MySQL Connector/C++?
2.) Mysql 5.5 LOAD DATA INFILE Permissions
3.) http://osdir.com/ml/db.mysql.c++/2004-04/msg00097.html
Essentially, if you want the ability to use the LOAD DATA LOCAL INFILE functionality from a programmatic connector API, you have to use the MySQL C API or hack/fork the existing MySQL C++ API to expose the connection structure. Or just stick to executing LOAD DATA LOCAL INFILE from the command line :(