I have a rowversion value that comes from the database in binary format. I need to convert that value to a string in order to pass it to my front-end code. Then, when the user submits data back to the server, I need to convert that string back to binary. Here is an example of my data:
Binary 00000010586
The output above is what I see when my query returns the value. I then tried to encode it, and the encoded value looks like this: iV
Then I tried to decode the value back, and this is what I used:
#charsetDecode( local.encodedVal, "utf-8" )#
Then I got this error message: ByteArray objects cannot be converted to strings.
In my database the rowversion column has the timestamp type. When the query returns that value, it's represented as binary in ColdFusion. I use this column as a unique id for each row in my table. Is there a way to handle this conversion in ColdFusion, and what would be the best approach?
You're working with binary data, not with string encodings. You will need to use binaryEncode and binaryDecode to send your data as a string and convert it back to binary.
The following example converts some binary data into two string representations and converts them back into the same byte array with binaryDecode in the dump.
<cfscript>
    // toBinary() decodes a Base64 string into a byte array we can work with
    binaryData = toBinary("SomeBinaryData++");

    // encode the same binary value as two different string representations
    hexEncoded = binaryEncode(binaryData, "hex");
    base64Encoded = binaryEncode(binaryData, "base64");

    writeDump([
        binaryData,
        hexEncoded,
        base64Encoded,
        // decoding either string yields the original byte array again
        binaryDecode(hexEncoded, "hex"),
        binaryDecode(base64Encoded, "base64")
    ]);
</cfscript>
Run the example on TryCF.com
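Applied to the rowversion question, a minimal sketch might look like this; the datasource, table, and column names are placeholders, not from the original post:
<cfscript>
    // hypothetical query; "rowVer" stands in for your rowversion/timestamp column
    qry = queryExecute(
        "SELECT id, rowVer FROM myTable",
        {},
        { datasource: "myDsn" }
    );

    // binary -> hex string, safe to send to the front end
    rowVerString = binaryEncode(qry.rowVer[1], "hex");

    // hex string -> binary again when the user submits it back
    rowVerBinary = binaryDecode(rowVerString, "hex");
</cfscript>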
Related
I have a table with a text field that contains formatted strings representing money.
For example, it will have values like these, but also some "bad" invalid data as well:
$5.55
$100050.44
over 10,000
$550
my money
570.00
I want to convert this to a numeric field, keeping the actual numbers where they can be retained and converting any that can't to null.
I was originally using this function, which did convert clean numbers (numbers without any formatting). The issue was that it would not convert a value such as $5.55 and instead set it to null.
CREATE OR REPLACE FUNCTION public.cast_text_to_numeric(
    v_input text)
    RETURNS numeric
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE
AS $BODY$
declare
    v_output numeric default null;
begin
    begin
        v_output := v_input::numeric;
    exception when others then
        return null;
    end;
    return v_output;
end;
$BODY$;
I then created a simple update statement which removes all non-digit characters but keeps the period.
update public.numbertesting set field_1=regexp_replace(field_1,'[^\w.]','','g')
and if I run this statement, it correctly converts the text data to numeric and maintains the number:
alter table public.numbertesting
alter column field_1 type numeric
using field_1::numeric
But I need to use the function in order to properly discard any bad data and set those values to null.
Even after I run the cleanup so that the text value is, say, 5.55, my "cast_text_to_numeric" function STILL sets it to null. I don't understand why it returns null when the statement above correctly converts the value to a proper number.
How can I fix my cast_text_to_numeric function to properly convert values such as 5.55?
I'm OK with discarding (setting to NULL) any values that don't end up as numbers and a period. The regular expression will strip out all other characters, and if there happen to be two numbers in the text field they would be combined into one (spaces are removed), which I'm fine with.
In the example data above, after conversion, the end result in the numeric field would be:
5.55
100050.44
null
550
null
570.00
FYI, I am on Postgres 11 right now
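One possible direction, as an untested sketch building on the function above: reject anything containing letters, strip the remaining formatting characters, and only then attempt the cast. The validation regex is an assumption about what should count as "good" data:
CREATE OR REPLACE FUNCTION public.cast_text_to_numeric(v_input text)
    RETURNS numeric
    LANGUAGE plpgsql
AS $BODY$
declare
    v_clean text;
begin
    -- values containing letters (e.g. 'over 10,000', 'my money') become null
    if v_input ~ '[A-Za-z]' then
        return null;
    end if;

    -- strip currency symbols, commas, spaces, etc., keeping digits and the period
    v_clean := regexp_replace(v_input, '[^0-9.]', '', 'g');

    -- anything that still is not a plain number becomes null
    if v_clean !~ '^[0-9]+(\.[0-9]+)?$' then
        return null;
    end if;

    return v_clean::numeric;
exception when others then
    return null;
end;
$BODY$;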
I am attempting to use ColdFusion's hmac() function to calculate an HMAC value using binaryEncode(binaryObj,'Base64') instead of toBase64() since that function is deprecated. It works perfectly with toBase64() but not with binaryEncode(). The docs are not very informative. Can someone help me understand why I cannot get the same value using binaryEncode?
From what I understand, the hmac() function returns its results in hexadecimal format. binaryEncode() expects a binary value, so the hmac() results must first be converted from hex to binary before they can be converted to base64.
<cfset string = "1234567890" />
<cfset secretKey = "abcdefghijklmnopqrstuvwxyz" />
<!--- Get Hex results from HMAC() --->
<cfset hmacHex = hmac(string,secretKey,'HMACSHA256') />
<!--- Decode the binary value from hex --->
<cfset hmacAsBinary = binaryDecode(hmacHex,'hex') />
<!--- Convert binary object to Base64 --->
<cfset hmacBase64 = binaryEncode(hmacAsBinary, 'base64') />
<cfoutput>
<!--- incorrect hmac signature --->
hmacBase64: #hmacBase64#<br>
<!--- correct hmac signature --->
toBase64: #toBase64(hmac(string,secretKey,'HMACSHA256'))#<br>
</cfoutput>
The results are:
hmacBase64: VEVGNnqg9b0eURaDCsA4yIOz5c+QtoJqIPInEZOuRm4=
toBase64: NTQ0NTQ2MzY3QUEwRjVCRDFFNTExNjgzMEFDMDM4Qzg4M0IzRTVDRjkwQjY4MjZBMjBGMjI3MTE5M0FFNDY2RQ==
One thing I noticed is that the results are much longer when using toBase64(). I can't seem to figure out why I can't use binaryEncode(). However, I would like to, since toBase64() is deprecated. Any insight is much appreciated. Thanks!
Update based on comments:
Well, using ToBase64(Hmac(...)) is not the correct way to convert a hex string to base64 ;-) However, it sounds like the API requires something other than a straight conversion. If so, just do what the ToBase64(hmac(...)) code is doing, i.e. decode the hex string as UTF8 and re-encode it as base64:
matchingResult = binaryEncode(charsetDecode(hmacHex, "utf-8"), "base64")
Short answer:
The two methods are encoding totally different values. That is why the results do not match. The correct way to convert the hex string to base64 is using BinaryEncode/Decode().
Longer answer:
<!--- correct hmac signature --->
toBase64: #toBase64(hmac(string,secretKey,'HMACSHA256'))#<br>
Actually that is not the correct way to convert hex to base64.
Hexadecimal and Base64 are just different ways of representing a binary value. In order to get the same results, the two methods need to start with the same binary value. In this case, they are actually encoding totally different values. Hence the difference in the results.
With a hexadecimal string, each byte is represented by two characters. So the binary will be half the size of the original string. In the case of HMAC(HMACSHA256), the resulting hex string is 64 characters long. So the binary value should be 32 bytes. To obtain the correct binary value, the string must be decoded as hex:
original string length = #len(hmacHex)#
binary size = #arrayLen(binaryDecode(hmacHex, "hex"))#
The problem with ToBase64 is that it decodes the string incorrectly. It treats the input as UTF8 and decodes the characters in the string individually. So the binary value is double the size it should be. Notice it is 64 bytes, instead of 32? That is why the final string is longer as well.
UTF8 binary size = #arrayLen(charsetDecode(hmacHex, "utf-8"))#
ToBase64 binary size = #arrayLen(binaryDecode(toBase64(hmacHex), "base64"))#
So again, the two methods produce different results because they are encoding totally different values. However, strictly speaking, only the first method is correct. To re-encode a hex string as base64 use binaryEncode/binaryDecode:
correctResult = binaryEncode(binaryDecode(hmacHex, "hex"), "base64")
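For reference, here is a small cfscript sketch (reusing the sample string and key from the question) that ties the sizes discussed above together:
<cfscript>
    hmacHex = hmac("1234567890", "abcdefghijklmnopqrstuvwxyz", "HMACSHA256");

    // 64 hex characters -> 32 bytes -> 44 base64 characters
    writeOutput("hex length: " & len(hmacHex) & "<br>");
    writeOutput("binary size: " & arrayLen(binaryDecode(hmacHex, "hex")) & "<br>");
    writeOutput("base64: " & binaryEncode(binaryDecode(hmacHex, "hex"), "base64"));
</cfscript>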
I'm using Python to read values from SQL Server (pypyodbc) and insert them into PostgreSQL (psycopg2).
A value in the NAME field has come up that is causing errors:
Montaño
The value exists in my MSSQL database just fine (SQL_Latin1_General_CP1_CI_AS collation), and can be inserted into my PostgreSQL database (UTF8) just fine using pgAdmin and an insert statement.
The problem is that selecting it using Python causes the value to be converted to:
Monta\xf1o
(\xf1 is the Latin-1 code point for 'Latin small letter n with tilde')
...which is causing the following error to be thrown when trying to insert into PostgreSQL:
invalid byte sequence for encoding "UTF8": 0xf1 0x6f 0x20 0x20
Is there any way to avoid the conversion of the input string to the string that is causing the error above?
Under Python 2 you actually do want to perform a conversion from a plain byte string to the unicode type. So, if your code looks something like
sql = """\
SELECT NAME FROM dbo.latin1test WHERE ID=1
"""
mssql_crsr.execute(sql)
row = mssql_crsr.fetchone()
name = row[0]
then you probably want to convert the basic latin1 string (retrieved from SQL Server) to the type unicode before using it as a parameter to the PostgreSQL INSERT, i.e., instead of
name = row[0]
you would do
name = unicode(row[0], 'latin1')
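Putting the pieces together, a minimal Python 2 sketch might look like the following; the DSN, connection string, and the PostgreSQL target table are placeholders, not from the original post:
# -*- coding: utf-8 -*-
# Python 2 sketch: decode the latin1 bytes from SQL Server into a unicode
# object before handing the value to psycopg2, which sends it as UTF-8.
import pypyodbc
import psycopg2

mssql_conn = pypyodbc.connect("DSN=my_mssql_dsn")       # placeholder DSN
pg_conn = psycopg2.connect("dbname=mydb user=myuser")   # placeholder connection

mssql_crsr = mssql_conn.cursor()
pg_crsr = pg_conn.cursor()

mssql_crsr.execute("SELECT NAME FROM dbo.latin1test WHERE ID=1")
row = mssql_crsr.fetchone()

name = unicode(row[0], 'latin1')   # latin1 byte string -> unicode

pg_crsr.execute("INSERT INTO latin1test (name) VALUES (%s)", (name,))
pg_conn.commit()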
I currently have a set of values which I retrieved after filtering data from a CSV file. Ultimately I had to do some filtering, but I have the same numbers available as a list, DataFrame, or array. I need to take each number, convert it to hex, then take the first 8 digits of the hex and convert that to decimal for each element. Lastly, I also need to convert the last 8 digits of the same hex value to decimal for each value.
I cannot provide a snippet because it is sensitive data, but here is an example.
I basically have something like this
>>> list_A
[52894036, 78893201, 45790373]
If I convert it to a dataframe and call df.dtypes, it says dtype: object and I can convert the values of Column A to bool, int, or string, but the dtype is always an object.
It does not matter whether the solution is a function or just a simple loop. I have tried many methods and am unable to attain the results I need. Ultimately the data is taken from different CSV files and will never have the same values or list size.
Pandas is designed to work primarily with integers and floats, with no particular facilities for hexadecimal that I know of, but you can use apply to access standard python conversion functions like hex and int:
import pandas as pd

df = pd.DataFrame({ 'a': [52894036999, 78893201999, 45790373999] })
df['b'] = df['a'].apply(hex)           # int -> hex string, e.g. '0xc50baf407'
df['c'] = df['b'].apply(int, base=0)   # hex string -> int (base=0 honours the '0x' prefix)
Results:
a b c
0 52894036999 0xc50baf407 52894036999
1 78893201999 0x125e66ba4f 78893201999
2 45790373999 0xaa951a86f 45790373999
Note that this answer is for Python 3. For Python 2 you may need to strip off the trailing "L" in column "b" with str[:-1].
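To get at the first 8 and last 8 hex digits the question asks about, one possible extension of this approach is sketched below; it assumes each value is large enough to fill 16 hex digits and left-pads shorter ones with zeros, which may or may not match the real data:
import pandas as pd

df = pd.DataFrame({'a': [52894036, 78893201, 45790373]})   # placeholder values

# hex without the '0x' prefix, zero-padded to 16 digits
df['hex'] = df['a'].apply(lambda v: format(v, 'x').zfill(16))

# first 8 hex digits converted back to decimal
df['first8_dec'] = df['hex'].str[:8].apply(int, base=16)

# last 8 hex digits converted back to decimal
df['last8_dec'] = df['hex'].str[-8:].apply(int, base=16)

print(df)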
I have a problem when I insert a JSON string into MongoDB from a C++ function. I am basically creating a big std::string formatted as JSON and putting my values into it.
Some strings in the data I put into the JSON contain accents, and I get an error when I try to view the document correctly in the DB afterwards.
This is my update/insert code:
mongodb_client_connector.update
(
    mongodb_database + "." + MONGODB_COLLECTION,
    Query(BSON(MONGODB_ID << OID(param_oid))),
    fromjson(The_JSON_I_Wrote)
);
This is the result:
How do I format the string correctly so I get the accents?
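If the accented text is arriving as Latin-1 (an assumption; the question does not say which encoding the source strings use), one minimal sketch is to re-encode it as UTF-8 before concatenating it into the JSON that is passed to fromjson:
#include <string>

// Sketch: convert a Latin-1 encoded string to UTF-8 so that the JSON
// handed to fromjson() contains valid UTF-8. Assumes the input really is Latin-1.
std::string latin1_to_utf8(const std::string &in)
{
    std::string out;
    for (unsigned char c : in)
    {
        if (c < 0x80)
        {
            out += static_cast<char>(c);
        }
        else
        {
            out += static_cast<char>(0xC2 + (c > 0xBF)); // lead byte: 0xC2 or 0xC3
            out += static_cast<char>((c & 0x3F) | 0x80); // continuation byte
        }
    }
    return out;
}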