So, this will not work with mysql_query().
I am strictly working with C++ and I am not using PHP.
I want this two-statement query to be executed so that I always get a unique ID in a transactional system with concurrent users creating IDs:
mysql_query(connection,
    "INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID();");
It works perfectly in the MySQL database itself, but I need to add it to my Eclipse project (I am using Ubuntu 12.04 LTS).
My application is quite big and I would rather not switch to mysqli if this is possible, but if there is no other way that will be OK.
Can you help me with this? Thanks in advance.
According to the MySQL C API documentation:
MySQL 5.6 also supports the execution of a string containing multiple
statements separated by semicolon (“;”) characters. This capability is
enabled by special options that are specified either when you connect
to the server with mysql_real_connect() or after connecting by
calling mysql_set_server_option().
And:
CLIENT_MULTI_STATEMENTS enables mysql_query() and mysql_real_query()
to execute statement strings containing multiple statements separated
by semicolons. This option also enables CLIENT_MULTI_RESULTS
implicitly, so a flags argument of CLIENT_MULTI_STATEMENTS to
mysql_real_connect() is equivalent to an argument of
CLIENT_MULTI_STATEMENTS | CLIENT_MULTI_RESULTS. That is,
CLIENT_MULTI_STATEMENTS is sufficient to enable multiple-statement
execution and all multiple-result processing.
So, you can supply several statements in a single mysql_query() call, separated by semicolons, provided you set up your MySQL connection a bit differently, using mysql_real_connect().
You need to pass the following flag as the last argument: CLIENT_MULTI_STATEMENTS, whose documentation says:
Tell the server that the client may send multiple statements in a
single string (separated by “;”). If this flag is not set,
multiple-statement execution is disabled. See the note following this
table for more information about this flag.
See C API Support for Multiple Statement Execution and 22.8.7.53. mysql_real_connect() for more details.
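For illustration, here is a minimal sketch of what that looks like end to end: connecting with CLIENT_MULTI_STATEMENTS and then draining every result set with mysql_next_result(). The credentials and database name are placeholders.

#include <mysql/mysql.h>
#include <stdio.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);

    /* The last argument enables multi-statement (and multi-result) mode. */
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "mydb", 0, NULL, CLIENT_MULTI_STATEMENTS)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    if (mysql_query(conn,
            "INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID();")) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        return 1;
    }

    /* Each statement in the batch produces its own result; loop until
       mysql_next_result() reports there are no more. */
    do {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res) {  /* the INSERT yields no rows, the SELECT yields one */
            MYSQL_ROW row;
            while ((row = mysql_fetch_row(res)))
                printf("new id: %s\n", row[0]);
            mysql_free_result(res);
        }
    } while (mysql_next_result(conn) == 0);

    mysql_close(conn);
    return 0;
}

Note that if the last insert id is all you need, mysql_insert_id(conn) returns it directly after the INSERT, without any multi-statement handling at all.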
Related
We have a situation where we are dealing with a relational source (Oracle). The system is designed so that we first have to execute a package that enables reading data from Oracle; only then can a user get results from a SELECT statement. I am trying to find a way to implement this in an Informatica mapping.
What we tried
1. In Pre-SQL we tried to execute the package, and in the SQL query we wrote the SELECT statement - the data does not get loaded into the target.
2. In Pre-SQL we wrote a block that executes the package and, just after that (within the same BEGIN...END block), an INSERT statement on top of the SELECT statement. This does insert the data through the INSERT statement; however, I am not in favor of this solution, as both source and target are dummies, which will confuse people in the future.
Is there any possibility to implement this somehow using the first option?
Please help and suggest.
Thanks
The Stored Procedure transformation is there for this purpose; configure it to execute at Source Pre-load.
Pre-SQL and the data read are not part of the same session. From what I understand, this needs to be done within the same session, as otherwise the read is granted only for that session.
What you can do is create a stored procedure/package that will grant the read access and then return the data. Use it as a SQL override on your Source Qualifier (SQ). This way the SQ will read the data as usual. The concept:
CREATE OR REPLACE PROCEDURE ReadMyData (p_rows OUT SYS_REFCURSOR) AS
BEGIN
    -- grant the read access first
    EXECUTE IMMEDIATE 'BEGIN GiveMeTheReadAccess; END;';
    -- then hand the rows back through a ref cursor
    OPEN p_rows FOR SELECT * FROM MyTable;
END;
And use ReadMyData on the Source Qualifier.
I am currently developing server software in C++ with a MySQL data backend. I am using the official MySQL Connector library from Oracle to work with MySQL. The connection itself is working and I'm not having any issues with that.
My problem is that the database and the table schemas tend to change every once in a while, because new tables and columns keep getting added. Existing columns may also be changed for the same reason. To make sure I recognize outdated server software quickly, I wanted to add a warning for when the database has changed.
My first idea was to hardcode how the database (and its tables and so on) should look and then check whether the current database matches the hardcoded data. But I have no clue how to achieve that.
In summary, I want to be able to detect whether:
A table has been added or removed
A column in a table has been altered
A column in a table has been added or removed
with as little C++ code as possible. It should also be quite easy to maintain.
Additional information will be added when required.
I would suggest the following approach:
1) Fork and execute the mysql command-line client. Set up a pair of pipes to mysql's standard input and output.
2) At this point you should be able to execute simple commands by piping them to mysql via the standard input pipe, and read the output from the standard output pipe.
You will need to make careful notes as to the output format of each mysql command, so that you know when you finished reading its output, and you can send the next command.
3) As the first order of business, execute:
show tables;
The output that comes back will list all tables in the database. Parsing the output into a list of table names is trivial. Then execute for each table:
show create table <tablename>;
The resulting output shows all fields in the table, its keys, and constraints. Pretty much all of this table's schema. Lather, rinse, repeat, for every table.
4) In this manner you can capture a basic schema of the entire database, for comparison purposes. If necessary, use the same approach to capture the triggers, and other objects. You'll likely need to do some minor massaging of the data, and exclude a few bits. "show create table", for example, will include the current AUTO_INCREMENT values, which you can ignore.
This general approach, of driving a mysql process via its standard input and output, is a bit wobbly, of course. With a little bit of work, you can use mysql's native client library, and execute all of these commands, and capture their results, directly. This should be more reliable.
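As a rough sketch of that more reliable route (assuming the MySQL C client library, with connection setup omitted), you could build one schema snapshot string and compare or hash it against a stored known-good copy:

#include <mysql/mysql.h>
#include <string>
#include <vector>

/* Build a single snapshot of the schema by concatenating the output of
   SHOW CREATE TABLE for every table. Compare (or hash) the result
   against your hardcoded copy; remember to strip the AUTO_INCREMENT
   clause first, as noted above. */
std::string snapshot_schema(MYSQL *conn)
{
    std::string snapshot;
    std::vector<std::string> tables;

    if (mysql_query(conn, "SHOW TABLES") != 0)
        return snapshot;
    MYSQL_RES *res = mysql_store_result(conn);
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)))
        tables.push_back(row[0]);
    mysql_free_result(res);

    for (const std::string &t : tables) {
        std::string q = "SHOW CREATE TABLE `" + t + "`";
        if (mysql_query(conn, q.c_str()) != 0)
            continue;
        res = mysql_store_result(conn);
        if ((row = mysql_fetch_row(res)))
            snapshot += std::string(row[1]) + ";\n";  /* column 1 is the DDL */
        mysql_free_result(res);
    }
    return snapshot;
}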
Background:
We have an application that uses the ODBC API to interact with Access and SQL Server (dynamically, depending on user's configuration).
I have discovered a bug which might be in the ODBC SQL driver, or may be a misconfiguration issue with the ODBC DSN we create, or may be a bug somehow in our code.
When a document is edited and saved, we query the database to see if this file has a corresponding record in the database - if so, we update the record with the updated data from the document; if not, we do an insert to create the necessary record for it.
We use the filename as the unique primary key on our table, and this works fine normally.
The bug is that if the filename contains characters outside of the current ANSI code page, then the select indicates no matches:
SQL: SELECT * FROM "My Designs" WHERE "PATHNAME" = '\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo' [# matches = 0]
However, when the insert is attempted, we get a unique key violation (of course) - since there already is a record with that filename.
Database error: Violation of PRIMARY KEY constraint 'PK__My Desig__1B3D5B4BF643706B'. Cannot insert duplicate key in object 'dbo.My Designs'. The duplicate key value is (\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo).
The statement has been terminated.
I've been over the code with a fine-tooth comb, and I can see nothing wrong. :(
The SQL statement that is being generated produces the correct Unicode output of the filename. Our application is compiled for Unicode. The column is SQL_WVARCHAR in ODBC speak.
I've tried adding AutoTranslate=no to the DSN configuration string, but that appears to have no effect.
I've tried logging the database connection from ODBC control panel. Sadly, that interface produces an ANSI log file - so I cannot verify UNICODE / ANSI issues using that tool.
Questions:
Is there a tool I can use to verify that these statements are being created / issued correctly by the ODBC driver to the SQL Server database?
Is there a better way to use ODBC so that the driver doesn't get canoodled by a simple UNICODE string in a SELECT query vs. an INSERT request?
Any other ideas for how to approach this problem (short of replacing our technology)?
In the SELECT statement, make sure you prefix the WHERE-clause string literal with N to tell SQL Server it is Unicode:
..."PATHNAME" = N'\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo'
Also, MFC converts the data to MBCS or Unicode depending on your project configuration. Make sure you use CStringT in your recordset.
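If you can change the query code, a more robust alternative (a sketch, with statement-handle setup omitted) is to bind the filename as a wide-character parameter instead of splicing it into the SQL text, so the driver never routes it through the ANSI code page:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <wchar.h>

/* hstmt is an allocated statement handle on an open connection. */
SQLRETURN find_record(SQLHSTMT hstmt, const wchar_t *pathname)
{
    wchar_t sql[] = L"SELECT * FROM \"My Designs\" WHERE \"PATHNAME\" = ?";
    SQLLEN ind = SQL_NTS;

    /* SQL_C_WCHAR / SQL_WVARCHAR keeps the value Unicode end to end. */
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_WVARCHAR,
                     wcslen(pathname), 0,
                     (SQLPOINTER)pathname, 0, &ind);

    return SQLExecDirectW(hstmt, (SQLWCHAR *)sql, SQL_NTS);
}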
When performing a query that returns data, the MySQL C API allows you to specify whether you want to "use" or "store" the result set. To "use" the result set means the results are only sent from the server to the client when requested (e.g., one row is sent to the client each time that row is accessed). To "store" the result set means the entire result set is sent from the server to the client "in advance". The former requires less memory on the client, the latter more memory.
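For reference, a minimal sketch of the two modes in the MySQL C API (connection setup omitted; the table name is a placeholder):

#include <mysql/mysql.h>

void fetch_all(MYSQL *conn, bool buffer_on_client)
{
    mysql_query(conn, "SELECT id FROM big_table");

    /* "store" buffers the entire result set on the client;
       "use" streams rows from the server on demand. */
    MYSQL_RES *res = buffer_on_client ? mysql_store_result(conn)
                                      : mysql_use_result(conn);
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)))
        ;  /* process row[0]; with "use", each fetch may hit the network */
    mysql_free_result(res);
}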
Does the PostgreSQL C API provide similar functionality?
The answer to this question can be found here:
http://www.postgresql.org/docs/current/static/libpq-single-row-mode.html
... call PQsetSingleRowMode immediately after a successful call of PQsendQuery (or a sibling function).
Note that this is only available in PostgreSQL 9.2 or greater.
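A minimal sketch of the pattern (the query text and connection handling are placeholders):

#include <libpq-fe.h>
#include <stdio.h>

void stream_rows(PGconn *conn)
{
    if (!PQsendQuery(conn, "SELECT id, name FROM big_table"))
        return;
    PQsetSingleRowMode(conn);  /* must be called right after PQsendQuery */

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        /* each PGRES_SINGLE_TUPLE result carries exactly one row; the
           final PGRES_TUPLES_OK result marks the end of the set */
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s %s\n", PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
        PQclear(res);
    }
}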
How can I encode/escape a varchar to be more secure without using cfqueryparam? I want to implement the same behaviour without using <cfqueryparam>, to get around the "Too many parameters were provided in this RPC request. The maximum is 2100" problem. See: http://www.bennadel.com/blog/1112-Incoming-Tabular-Data-Stream-Remote-Procedure-Call-Is-Incorrect.htm
Update:
I want the validation / security part, without generating a prepared-statement.
What's the strongest encode/escape I can do to a varchar inside <cfquery>?
Something similar to mysql_real_escape_string() maybe?
As others have said, that length-related error originates at a deeper level, not within the queryparam tag. And it offers some valuable protection and therefore exists for a reason.
You could always either insert those values into a temporary table and join against that, or use the list functions to split that huge list into several smaller lists which are then used separately:
SELECT name,
       .....,
       createDate
FROM   somewhere
WHERE  (someColumn IN (a, b, c, d, e)
    OR  someColumn IN (f, g, h, i, j)
    OR  someColumn IN (.........));
cfqueryparam performs multiple functions.
It verifies the datatype. If you say integer, it makes sure the value is an integer, and if not, it does not allow it to pass.
It separates the data of a SQL script from the executable code (this is where you get protection from SQL injection). Anything passed as a param cannot be executed.
It creates bind variables at the DB engine level to help improve performance.
That is how I understand cfqueryparam to work. Did you look into the option of making several small calls vs one large one?
It is a security issue: it stops SQL injection.
Adobe recommends that you use the cfqueryparam tag within every cfquery tag, to help secure your databases from unauthorized users. For more information, see Security Bulletin ASB99-04, "Multiple SQL Statements in Dynamic Queries," at www.adobe.com/devnet/security/security_zone/asb99-04.html, and "Accessing and Retrieving Data" in the ColdFusion Developer's Guide.
The first thing I'd be asking myself is "how the heck did I end up with more than 2100 params in a single query?". Because that in itself should be a very very big red flag to you.
However if you're stuck with that (either due to it being outwith your control, or outwith your motivation levels to address ;-), then I'd consider:
the temporary table idea mentioned earlier
for values over a certain length just chop 'em in half and join 'em back together with a string concatenator, eg:
SELECT *
FROM tbl
WHERE col IN ('a', ';DROP DATABAS'+'E all_my_data', 'good', 'etc' [...])
That's a bit grim, but then again your entire query sounds grim, so that might not be such a concern.
vetting param values that are over a certain length or have stop words in them, or something. This is also quite a grim suggestion.
SERIOUSLY go back over your requirement and see if there's a way to not need 2100+ params. What is it you're actually needing to do that requires all this???
The problem does not reside with cfqueryparam, but with MS SQL Server itself:
Every SQL batch has to fit in the Batch Size Limit: 65,536 * Network Packet Size.
Maximum size for a SQL Server Query? IN clause? Is there a Better Approach
And
http://msdn.microsoft.com/en-us/library/ms143432.aspx
The few times that I have come across this problem I have been able to rewrite the query using subselects and/or table joins. I suggest trying to rewrite the query like this in order to avoid the parameter max.
If it is impossible to rewrite (e.g. all of the multiple parameters are coming from an external source) you will need to validate the data yourself. I have used the following regex in order to perform a safe validation:
<cfif ReFindNoCase("[^a-z0-9_\ \,\.]", arguments.InputText) IS NOT 0>
    <cfthrow type="Application" message="Invalid characters detected">
</cfif>
The code will force an error if any character other than a letter, number, underscore, space, comma, or period is found in the text string. (You may want to handle the situation more cleanly than just throwing an error.) I suggest you modify this as necessary based on the expected or allowed values in the fields you are validating. If you are validating a string of comma-separated integers, you may switch to a more limiting regex like "[^0-9\ \,]", which will only allow numbers, commas, and spaces.
This answer will not escape the characters; it will not allow them in the first place. It should be used on any data that you will not use with <cfqueryparam>. Personally, I have only found a need for this when I use a dynamic sort field; not all databases will allow you to use bind variables with the ORDER BY clause.