Trying to use ColdFusion to create HMAC-SHA1 hash for API authentication

I am at my wit's end on this one; I just can't find the right combination of code to make this work. I'm trying to create an authentication digest for an API query. I've tried many CFML functions (for example: Coldfusion HMAC-SHA1 encryption and HMAC SHA1 ColdFusion), but I'm not coming up with the same results that are cited in the API documentation. Here's that example (basically elements of the request header, with line breaks as delimiters):
application/xml\nTue, 30 Jun 2009 12:10:24 GMT\napi.summon.serialssolutions.com\n/2.0.0/search\ns.ff=ContentType,or,1,15&s.q=forest\n
and here's the key:
ed2ee2e0-65c1-11de-8a39-0800200c9a66
which according to the documentation should result in:
3a4+j0Wrrx6LF8X4iwOLDetVOu4=
when the HMAC hash is converted to Base64. Any ideas would be most appreciated!

The problem is your input string, not the functions. The first one works fine, though I would change the charset to UTF-8 or make it an argument. Otherwise the results depend on the JVM default charset, which may not always be correct and can change between environments, which would break the code.
Verify you are constructing the sample string correctly. Are you using chr(10) for the new lines? Note: it must also end with a new line.
Code:
<cfscript>
    // hmacEncrypt() is the UDF from the first thread linked in the question
    headers = [ "application/xml"
              , "Tue, 30 Jun 2009 12:10:24 GMT"
              , "api.summon.serialssolutions.com"
              , "/2.0.0/search"
              , "s.ff=ContentType,or,1,15&s.q=forest"
              ];
    // join with LF and append the trailing new line the API requires
    theText = arrayToList(headers, chr(10)) & chr(10);
    theKey  = "ed2ee2e0-65c1-11de-8a39-0800200c9a66";
    theHash = binaryEncode( hmacEncrypt(theKey, theText), "base64" );
    writeDump(theHash);
</cfscript>
Result:
3a4+j0Wrrx6LF8X4iwOLDetVOu4=
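As a side note: if you are on ColdFusion 10 or later (an assumption; the question does not say which version), the built-in hmac() function avoids the UDF entirely. A minimal sketch, reusing the headers array from above:
<cfscript>
    // assumes CF10+, where hmac() is built in; it returns a hex string
    theText = arrayToList(headers, chr(10)) & chr(10);
    theKey  = "ed2ee2e0-65c1-11de-8a39-0800200c9a66";
    hexHash = hmac(theText, theKey, "HMACSHA1", "UTF-8");
    // convert hex -> binary -> base64 to match the format the API expects
    writeDump( binaryEncode(binaryDecode(hexHash, "hex"), "base64") );
</cfscript>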


Extra string & pipe character in Laravel Cookies

In a Laravel 6x project I'm working on I'm setting a cookie with:
Cookie::queue('remember_me', json_encode(['uid' => $user->id, 'token' => $token]),2628000);
I'm reading the cookie and decrypting it with:
$cookies = Crypt::decrypt(Cookie::get('remember_me'),false);
This works well, except that the value of $cookies has an extra prepended string and a | delimiter in it:
e80cd502fec2a621b624ead8eb1cc91a2e94846b|{"uid":872,"token":"l1214065120208k"}
I can work with that, obviously, to get what I need, but I have been unable to find anything on why that string and | are being prepended to the cookie. Any explanation or documentation link?
I did find another thread here with a similar question but no answer:
How to decrypt cookies in Laravel 8
I also found a thread suggesting that Laravel 8 adds the session_id to the cookie string. Is that what I'm seeing here?
Thanks,
Michael
This value looks to be an HMAC-SHA1 of the cookie name with v2 appended to the end.
This logic is implemented in the CookieValuePrefix class in Laravel and the code looks like so:
public static function create($cookieName, $key)
{
    return hash_hmac('sha1', $cookieName.'v2', $key).'|';
}
This is used in the EncryptCookies middleware when encrypting and decrypting accordingly. The relevant source code is:
// in decrypt() function
$hasValidPrefix = strpos($value, CookieValuePrefix::create($key, $this->encrypter->getKey())) === 0;

$request->cookies->set(
    $key, $hasValidPrefix ? CookieValuePrefix::remove($value) : null
);

// in encrypt() function
$this->encrypter->encrypt(
    CookieValuePrefix::create($cookie->getName(), $this->encrypter->getKey()).$cookie->getValue(),
    static::serialized($cookie->getName())
)
I put this logic into a CyberChef page here to test it out with some local cookies I had and verify the output matched, and it did. If you go there and plug in your app key (preferably a disposable one) you should see it output the hash value you have in your question.
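If you just want to strip the prefix in your own code, here is a minimal sketch (not Laravel's API; $appKey stands in for your decoded APP_KEY):
// recompute the prefix for this cookie name and remove it from the decrypted value
$prefix = hash_hmac('sha1', 'remember_me'.'v2', $appKey).'|';

if (strpos($cookies, $prefix) === 0) {
    $payload = substr($cookies, strlen($prefix)); // the original JSON you queued
    $data = json_decode($payload, true);          // ['uid' => 872, 'token' => ...]
}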

R: Countrycode package not supporting regex as the origin

I have a list of countries that I need to convert into a standardized format (iso3c). Some have long names, others have 2- or 3-digit codes, and others do not display the whole country name, like "Africa" instead of "South Africa". I've done some research and came up with the countrycode package in R. However, when I tried to use "regex", R doesn't seem to recognize it. I'm getting the error below:
> countrycode(data,"regex","iso3c", warn = TRUE)
Error in countrycode(data, "regex", "iso3c", :
Origin code not supported
Is there another option I should use?
Thanks!
You can view the README for the countrycode package at https://github.com/vincentarelbundock/countrycode, or you can pull up the help file in R by entering ?countrycode::countrycode into your R console.
"regex" is not a valid 'origin' value (2nd argument in the countrycode() function). You must use one of "cowc", "cown", "eurostat", "fao", "fips105", "imf", "ioc", "iso2c", "iso3c", "iso3n", "p4_ccode", "p4_scode", "un", "wb", "wb_api2c", "wb_api3c", "wvs", "country.name", "country.name.de" (using latest version 0.19).
If you use either of the following 'origin' values, regex matching will be performed automatically: "country.name" or "country.name.de"
If you're using a custom dictionary with the new (as of version 0.19) custom_dict argument, you must set the origin_regex argument to TRUE for regex matching to occur.
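For illustration, a sketch of that custom-dictionary route (the dictionary contents and column names here are invented):
library(countrycode)

# hypothetical dictionary: one regex pattern per target code
my_dict <- data.frame(
  pattern = c("south.*africa|^africa$", "united.*states|^usa$"),
  iso3c   = c("ZAF", "USA"),
  stringsAsFactors = FALSE
)

countrycode(data, origin = "pattern", destination = "iso3c",
            custom_dict = my_dict, origin_regex = TRUE, warn = TRUE)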
In your example, this should do what you want:
countrycode(data, origin = "country.name", destination = "iso3c", warn = TRUE)

How to support Chinese in http request body?

URL = http://example.com,
Header = [],
Type = "application/json",
Content = "我是中文",
Body = lists:concat(["{\"type\":\"0\",\"result\":[{\"url\":\"test.cn\",\"content\":\"", unicode:characters_to_list(Content), "\"}]}"]),
lager:debug("URL:~p, Body:~p~n", [URL, Body]),
HTTPOptions = [],
Options = [],
Response = httpc:request(post, {URL, Header, Type, Body}, HTTPOptions, Options),
The HTTP request body received by the HTTP server is not 我是中文. How do I fix this issue?
Luck of the Encoding
You must take special care to ensure input is what you think it is because it may differ from what you expect.
This answer applies to the Erlang release that I'm running which is R16B03-1. I'll try to get all of the details in here so you can test with your own install and verify.
If you don't take specific action to change it, a string will be interpreted as follows:
In the Terminal (OS X 10.9.2)
TerminalContent = "我是中文",
TerminalContent = [25105,26159,20013,25991].
In the terminal the string is interpreted as a list of unicode characters.
In a Module
BytewiseContent = "我是中文",
BytewiseContent = [230,136,145,230,152,175,228,184,173,230,150,135].
In a module, the default encoding is latin1 and strings containing unicode characters are interpreted as bytewise lists (of UTF-8 bytes).
If you use data encoded like BytewiseContent, unicode:characters_to_list/1 will double-encode the Chinese characters and ææ¯ä will be sent to the server where you expected 我是中文.
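A quick shell sketch of that mangling, assuming the R16 defaults described above:
%% the module (latin1 default) sees the UTF-8 bytes as a plain list
BytewiseContent = [230,136,145,230,152,175,228,184,173,230,150,135].
%% encoding those "characters" treats each byte as a codepoint,
%% double-encoding the data: 12 bytes become 24
byte_size(unicode:characters_to_binary(BytewiseContent)).  %% 24
%% reinterpreting the bytes as UTF-8 first recovers the real characters
unicode:characters_to_list(list_to_binary(BytewiseContent)).
%% [25105,26159,20013,25991]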
Solution
Specify the encoding for each source file and term file.
If you run an erl command line, ensure it is set up to use unicode.
If you read data from files, translate the bytes from the bytewise encoding to unicode before processing (this goes for binary data acquired using httpc:request/N as well).
If you embed unicode characters in your module, ensure that you indicate as much by commenting within the first two lines of your module:
%% -*- coding: utf-8 -*-
This will change the way the module interprets the string such that:
UnicodeContent = "我是中文",
UnicodeContent = [25105,26159,20013,25991].
Once you have ensured that you are concatenating characters and not bytes, the concatenation is safe. Don't use unicode:characters_to_list/1 to convert your string/list until the whole thing has been built up.
Example Code
The following function works as expected when given a Url and a list of unicode character Content:
http_post_content(Url, Content) ->
    ContentType = "application/json",
    %% concat the list of (character) lists
    Body = lists:concat(["{\"content\":\"", Content, "\"}"]),
    %% explicitly encode to UTF-8 before sending
    UnicodeBin = unicode:characters_to_binary(Body),
    httpc:request(post,
                  {
                    Url,
                    [],          % HTTP headers
                    ContentType, % content-type
                    UnicodeBin   % the body as binary (UTF-8)
                  },
                  [],                      % HTTP options
                  [{body_format, binary}]  % indicate the body is already binary
                 ).
To verify results I wrote the following HTTP server using node.js and express. The sole purpose of this dead-simple server is to sanity check the problem and solution.
var express    = require('express'),
    bodyParser = require('body-parser'),
    util       = require('util');

var app = express();
app.use(bodyParser());

app.get('/', function(req, res){
  res.send('You probably want to perform an HTTP POST');
});

app.post('/', function(req, res){
  util.log("body: " + util.inspect(req.body, false, 99));
  res.json(req.body);
});

app.listen(3000);
(This server is also posted in a Gist.)
Verifying
Again in Erlang, the following function checks that the HTTP response contains the echoed JSON and that the exact unicode characters were returned.
verify_response({ok, {{_, 200, _}, _, Response}}, SentContent) ->
    %% use jiffy to decode the JSON response
    {Props} = jiffy:decode(Response),
    %% pull out the "content" property value
    ContentBin = proplists:get_value(<<"content">>, Props),
    %% convert the binary value to unicode characters;
    %% it should equal what we sent
    case unicode:characters_to_list(ContentBin) of
        SentContent -> ok;
        Other ->
            {error, [
                {expected, SentContent},
                {received, Other}
            ]}
    end;
verify_response(Unexpected, _) ->
    {error, {http_request_failed, Unexpected}}.
The complete example.erl module is posted in a Gist.
Once you've got the example module compiled and an echo server running you'll want to run something like this in an Erlang shell:
inets:start().
Url = example:url().
Content = example:content().
Response = example:http_post_content(Url, Content).
If you've got jiffy set up you can also verify the content made the round trip:
example:verify_response(Response, Content).
You should now be able to confirm round-trip encoding of any unicode content.
Translating Between Encodings
While I explained the encodings above you will have noticed that TerminalContent, BytewiseContent, and UnicodeContent are all lists of integers. You should endeavor to code in a manner that allows you to be certain what you have in hand.
The oddball encoding is bytewise which may turn up when working with modules that are not "unicode aware". Erlang's guidance on working with unicode mentions this near the bottom under the heading Lists of UTF-8 Bytes. To translate bytewise lists use:
%% from http://www.erlang.org/doc/apps/stdlib/unicode_usage.html
utf8_list_to_string(StrangeList) ->
    unicode:characters_to_list(list_to_binary(StrangeList)).
My Setup
As far as I know, I don't have local settings that modify Erlang's behavior. My Erlang is R16B03-1 built and distributed by Erlang Solutions, my machine runs OS X 10.9.2.

Authenticating DotNetNuke Users in ColdFusion

Is there any way to authenticate users from other web apps using the DNN logins?
We have a main site that is using DNN and user logins are stored in the asp net membership table. From what I have been reading, the passwords are encrypted using the machine key and then salted. I see where this info is, but can't seem to encrypt passwords correctly using this method.
I'm trying with a ColdFusion web application on the same server where our DNN site is, but it doesn't want to work. You'd think it would be straightforward with the ColdFusion encryption function:
Encrypt(passwordstring, key [, algorithm, encoding, IVorSalt, iterations])
No matter what I try, I never get a matching value.
Any help, insight or pointing me in the right direction would be greatly appreciated!
(Edit: Original answer did not work in all cases. Substantially revised ...)
From what I have read, DNN uses an "SHA1" hash by default. The thread #barnyr posted shows it simply hashes the concatenated salt and password, but with a few twists.
DNN uses UTF-16LE to extract the password bytes, rather than CF's typical UTF-8.
It also extracts the salt and password bytes separately, which may produce different results than just decoding everything as a single string, which is what hash() does. (See demo below)
Given that CF9's Hash function does not accept binary (supported in CF11), I do not think it is possible to duplicate the results with native CF functions alone. Instead I would suggest decoding the strings into binary, then using java directly:
Code:
<cfscript>
    thePassword = "DT!#12";
    base64Salt  = "+muo6gAmjvvyy5doTdjyaA==";

    // extract bytes of the salt and password
    saltBytes = binaryDecode(base64Salt, "base64");
    passBytes = charsetDecode(thePassword, "UTF-16LE");

    // next combine the bytes. note, the returned arrays are immutable,
    // so we cannot use the standard CF tricks to merge them
    ArrayUtils = createObject("java", "org.apache.commons.lang.ArrayUtils");
    dataBytes  = ArrayUtils.addAll(saltBytes, passBytes);

    // hash the binary using java
    MessageDigest = createObject("java", "java.security.MessageDigest").getInstance("SHA-1");
    MessageDigest.update(dataBytes);
    theBase64Hash = binaryEncode(MessageDigest.digest(), "base64");

    WriteOutput("theBase64Hash= " & theBase64Hash & "<br/>");
</cfscript>
Demo of Differences:
<cfscript>
    theEncoding = "UTF-16LE";
    thePassword = "DT!#12";
    base64Salt  = "+muo6gAmjvvyy5doTdjyaA==";

    // extract the bytes SEPARATELY
    saltBytes = binaryDecode(base64Salt, "base64");
    passBytes = charsetDecode(thePassword, theEncoding);
    ArrayUtils = createObject("java", "org.apache.commons.lang.ArrayUtils");
    separateBytes = ArrayUtils.addAll(saltBytes, passBytes);

    // concatenate first, THEN extract the bytes
    theSalt = charsetEncode( binaryDecode(base64Salt, "base64"), theEncoding );
    concatenatedBytes = charsetDecode( theSalt & thePassword, theEncoding );

    // these are the raw bytes BEFORE hashing
    WriteOutput("separateBytes = " & arrayToList(separateBytes, "|") & "<br>");
    WriteOutput("concatenatedBytes = " & arrayToList(concatenatedBytes, "|"));
</cfscript>
Results:
separateBytes = -6|107|-88|-22|0|38|-114|-5|-14|-53|-105|104|77|-40|-14|104|68|0|84|0|33|0|64|0|49|0|50|0
concatenatedBytes = -6|107|-88|-22|0|38|-114|-5|-14|-53|-105|104|-3|-1|68|0|84|0|33|0|64|0|49|0|50|0
Most likely the password is not encrypted; it is hashed. Hashing is different from encrypting because it is not reversible.
You would not use ColdFusion's encrypt() function for this, you would use its hash() function.
So the questions you'll need to answer to figure out how to hash the passwords in CF to be able to auth against the DNN users are:
What algorithm is DNN using to hash the passwords?
How is the salt being used with the password prior to hashing?
Is DNN iterating over the hash X number of times to improve security?
All of those questions must be answered to determine how CF must use the hash() function in combination with the salt and user-submitted passwords.
I'll make some assumptions to provide an answer.
If we assume that no iteration is being done and that the salt is simply being appended to the password prior to using SHA1 to hash it, then you'd be able to reproduce the hash digest like this:
<cfset hashDigest = hash(FORM.usersubmittedPassword & saltFromDB, "SHA1") />
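For completeness, if it turned out DNN did iterate the hash (purely hypothetical; nothing in this thread confirms it or the exact scheme), the CF side would have to loop the same number of times:
<cfscript>
    // hypothetical: only valid if DNN re-hashes the digest "iterations" times
    digest = hash(FORM.usersubmittedPassword & saltFromDB, "SHA1");
    for (i = 1; i LTE iterations; i++) {
        digest = hash(digest, "SHA1");
    }
</cfscript>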
(Posting a new response to keep the "encrypted" process separate from "hashing")
For "encrypted" keys, the DNN side uses the standard algorithms ie DES, 3DES or AES - depending on your machineKey settings. But with a few differences you need to match in your CF code. Without knowing your actual settings, I will assume you are using the default 3DES for now.
Data To Encrypt
The encrypted value is a combination of the salt and password. But as with hashing, DNN uses UTF-16LE. Unfortunately, ColdFusion's Encrypt() function always assumes UTF-8, which will produce a very different result. So you need to use the EncryptBinary function instead.
// sample values
plainPassword = "password12345";
base64Salt    = "x7le6CBSEvsFeqklvLbMUw==";
hexDecryptKey = "303132333435363738393031323334353637383930313233";

// first extract the bytes of the salt and password
saltBytes = binaryDecode(base64Salt, "base64");
passBytes = charsetDecode(plainPassword, "UTF-16LE");

// next combine the bytes. note, the returned arrays are immutable,
// so we cannot use the standard CF tricks to merge them
ArrayUtils = createObject("java", "org.apache.commons.lang.ArrayUtils");
dataBytes  = ArrayUtils.addAll(saltBytes, passBytes);
Encryption Algorithm
With block ciphers, ColdFusion defaults to ECB mode (see Strong Encryption in ColdFusion), whereas .NET defaults to CBC mode, which requires an additional IV value. So you must adjust your CF code to match.
// convert the DNN hex key to base64 for ColdFusion
base64Key = binaryEncode(binaryDecode(hexDecryptKey, "hex"), "base64");

// create an IV and initialize it with all zeroes
// block size: 16 => AES, 8 => DES or TripleDES
blockSize = 8;
iv = javacast("byte[]", listToArray(repeatString("0,", blockSize)));

// encrypt using CBC mode
bytes = encryptBinary(dataBytes, base64Key, "DESede/CBC/PKCS5Padding", iv);

// result: WBAnoV+7cLVI95LwVQhtysHb5/pjqVG35nP5Zdu7T/Cn94Sd8v1Vk9zpjQSFGSkv
WriteOutput("encrypted password=" & binaryEncode(bytes, "base64"));

SOAP-ERROR: Encoding: string ... is not a valid utf-8 string

Hi, I have a web service built using the Zend Framework. One of the methods is intended to send details about an order. I ran into an encoding issue: one of the values being returned contains the following:
Jaime Torres Bodet #322-A Col. Lomas de Santa María
The webservice is returning the following fault:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>SOAP-ENV:Server</faultcode>
      <faultstring>SOAP-ERROR: Encoding: string 'Jaime Torres Bodet #322-A Col. Lomas de Santa Mar\xc3...' is not a valid utf-8 string</faultstring>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
How should I go about solving this problem?
Thanks
Followup: the problem was due to a string truncated by the database. The field was set to VARCHAR(50), and the truncation fell exactly in the middle of a multi-byte encoded character (the í in "María" was cut after its first byte, \xc3), leaving an invalid UTF-8 sequence.
What about changing the encoding settings?
SERVER:
$server = new SoapServer("some.wsdl", array('encoding' => 'ISO-8859-1')); // works for 'windows-1252' too
CLIENT:
$client = new SoapClient("some.wsdl", array('encoding' => 'ISO-8859-1')); // works for 'windows-1252' too
... then the conversion to UTF-8 is done automatically. I had a similar problem, and this helped me, so it is tested.
Today I ran into the same problem. The code which caused it was:
$request->Text = substr($text, 0, 40);
Changing substr to mb_substr solved the issue; substr counts bytes and can cut a multi-byte UTF-8 character in half, while mb_substr counts characters:
$request->Text = mb_substr($text, 0, 40, 'UTF-8');
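You can see why with the string from the question (the í is two bytes, 0xC3 0xAD, in UTF-8):
$s = 'Santa María';
var_dump(substr($s, 0, 10));             // "Santa Mar\xc3" -- a lone lead byte, invalid UTF-8
var_dump(mb_substr($s, 0, 10, 'UTF-8')); // "Santa Marí" -- cuts on character boundaries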
The problem is that í != i. Try converting your string to UTF-8 before using it in a request. It may look like this:
$string = iconv('windows-1252', 'UTF-8', $string);
See http://php.net/iconv
The answers above led me to try:
// encode in UTF-8
$string = utf8_encode($string);
which also resolved the error for me.
Reference: utf8_encode()
I fixed a problem like this by using mb_convert_encoding with array_walk_recursive to walk over my POST parameters array, $params.
Maybe this is useful for you:
array_walk_recursive($params, function (&$item) {
    $item = mb_convert_encoding($item, 'UTF-8');
});
I found out that in my case the problem was not the encoding of the strings, but that the file itself was not saved as UTF-8. Even explicitly saving it with UTF-8 encoding did not help. For me it worked to insert a comment with a UTF-8 character, like:
// Å