Hashed password sometimes longer than 128 characters - ColdFusion

I'm having a weird security-related problem. Lately I've been getting regular but intermittent errors when trying to insert hashed passwords into a SQL Server database field that's nvarchar(130):
<cfqueryparam value="#hashpass#" cfsqltype="cf_sql_char" maxLength="130">
The hashpass variable is set thus:
<cfset hashpass = Hash(arguments.password & getsalt.user_salt, "SHA-512")>
How is it possible for a SHA-512 hash to be longer than 128 characters, when the documentation says it should always be exactly 128? Here's the ColdFusion 10 error:
[Macromedia][SQLServer JDBC Driver][SQLServer]String or binary data would be truncated.

It seems from your error that the issue is at the database level, as ColdFusion is not failing your maxLength check on the cfqueryparam tag and is allowing the query to be executed. I just tested passing a string that exceeds the length specified in the maxLength attribute (on CF10) and got this error:
The cause of this output exception was that:
coldfusion.tagext.sql.QueryParamTag$InvalidDataException:
Invalid data value this-is-a-string-that-is-too-long exceeds maxlength setting 10.
As Adam Cameron mentioned in the comments to the question, it seems likely that it is a different field in your query that is throwing the error.
As the hashed password will always be 128 chars long, is there a reason why you are validating 130 chars?
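For reference, SHA-512 always produces a 64-byte digest, which hex-encodes to exactly 128 characters regardless of input length. A quick sketch in Python (ColdFusion's Hash() returns the same hex encoding, just uppercased):
import hashlib
# SHA-512 digests are 64 bytes = 128 hex characters, no matter the input size.
for password in ("a", "correct horse battery staple", "x" * 10000):
    digest = hashlib.sha512(password.encode()).hexdigest()
    print(len(digest))  # prints 128 every time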

Related

Max value validation for text field error in laravel

I'm trying to validate one of my text fields in Laravel.
The field should only accept numbers (0-9), with a minimum length of 10 characters and a maximum of 12.
This is my validation rule for the mentioned field,
'reload_pin'=>['required','numeric', 'min:10','max:12'],
But this gives me an error every time, even when I enter correct input, saying
input value should not be greater than 12
What would be the correct validation and the fix?
You are validating the value as numeric, so min and max check the numeric value itself (the number would have to be between 10 and 12), not the character length.
In order to validate your input properly you can try this:
'reload_pin' => 'required|string|min:10|max:12|regex:/^[0-9]+$/'
Treating the value as a string makes min and max check the length, and the anchored regex ensures every character is a digit. You may refer to the Laravel validation documentation for more details.
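The anchoring is what matters here. A quick sketch in Python (the regex semantics are the same as PCRE for this pattern) showing why an unanchored pattern like the original [0-9]{9} lets bad input through:
import re
# Unanchored: matches any string that merely CONTAINS 9 consecutive digits,
# so a value with letters in it still passes.
print(bool(re.search(r"[0-9]{9}", "a1234567890")))        # True (bad)
# Anchored: the whole value must be 10 to 12 digits, nothing else.
print(bool(re.fullmatch(r"[0-9]{10,12}", "1234567890")))  # True
print(bool(re.fullmatch(r"[0-9]{10,12}", "a1234567890"))) # False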

Sitecore logs full of WARN Datakey length bigger then 100 chars, and was trimmed

We are having some problems with the following line in the log files of Sitecore:
WARN Datakey length bigger then 100 chars, and was trimmed
It's not really a massive problem, but it is logging this up to 2000 times in one day.
I have investigated further: it probably has something to do with the DataKey column in the analytics database. I know there is a limit of 100 characters, but if we could find out what code adds a DataKey that is too large, we could fix it.
Anyone got an idea?
Thanks
If you look in your Analytics database and then look at the PageEvents table you'll see the DataKey in this table.
Take a look at the values in this field - it will more than likely be an exception occurring. The data key will have the details of the exception.
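If you want to pull the worst offenders out for inspection, something like this sketch works (Python with pyodbc; the connection string and database name are assumptions for your environment, and trimmed values will show up as exactly 100 characters):
import pyodbc
# Adjust DRIVER/SERVER/DATABASE to point at your Analytics database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Sitecore_Analytics;Trusted_Connection=yes;"
)
cursor = conn.cursor()
# Longest DataKey values first; these are the ones being trimmed.
cursor.execute(
    "SELECT TOP 50 DataKey, LEN(DataKey) AS KeyLength "
    "FROM PageEvents ORDER BY LEN(DataKey) DESC"
)
for data_key, key_length in cursor.fetchall():
    print(key_length, data_key)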

DatabaseError at /post/113/ value too long for type character varying(10)

I am using PostgreSQL on Heroku for my Django app. When I try to comment on my posts I sometimes get this error (again, only sometimes, not all the time).
Despite the error, the comment is still saved, but none of the code following the save() executes.
This problem only occurs on PostgreSQL, though. On my localhost, where I am using SQLite, everything works just fine.
I am not sure what the reason for this is.
This is what my model looks like:
class Comment(models.Model):
    post = models.ForeignKey(post)
    date = models.DateTimeField(auto_now_add=True)
    comment = models.TextField()
    comment_user = models.ForeignKey(User)
That is my Comment model.
So is it because I did not add max_length for comment?
Here is the Traceback
DatabaseError at /post/114/
value too long for type character varying(10)
Request Method: POST
Request URL: http://www.mysite.com/post/114/
Django Version: 1.4.1
Exception Type: DatabaseError
Exception Value:
value too long for type character varying(10)
Exception Location: /app/.heroku/venv/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 52
Python Executable: /app/.heroku/venv/bin/python2.7
Python Version: 2.7.2
Python Path:
['/app',
'/app/.heroku/venv/bin',
'/app/.heroku/venv/lib/python2.7/site-packages/pip-1.1-py2.7.egg',
'/app/.heroku/venv/lib/python2.7/site-packages/distribute-0.6.31-py2.7.egg',
'/app',
'/app/.heroku/venv/lib/python27.zip',
'/app/.heroku/venv/lib/python2.7',
'/app/.heroku/venv/lib/python2.7/plat-linux2',
'/app/.heroku/venv/lib/python2.7/lib-tk',
'/app/.heroku/venv/lib/python2.7/lib-old',
'/app/.heroku/venv/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/app/.heroku/venv/lib/python2.7/site-packages',
'/app/.heroku/venv/lib/python2.7/site-packages/PIL']
Server time: Wed, 5 Dec 2012 20:41:39 -0600
I can't help you with the Django parts (sorry) so I'll just speak PostgreSQL.
Somewhere in your application you have a varchar(10) column and you're trying to put something longer than ten characters into it; you probably have a missing validation somewhere. SQLite ignores the size in a varchar(n) column and treats it as text, which has no size limit, so you can do things like this with SQLite:
sqlite> create table t (s varchar(5));
sqlite> insert into t (s) values ('Where is pancakes house?');
sqlite> select * from t;
s
Where is pancakes house?
with nary a complaint. Similarly, SQLite lets you do ridiculous things like putting a string into a numeric column.
This is what you need to do:
Stop developing on SQLite when you're deploying on top of PostgreSQL. Install PostgreSQL locally and develop on top of that; you should even go so far as to install the same version of PostgreSQL that you'll be using at Heroku. There are all sorts of differences between databases that will cause you headaches; this little field size problem is just your gentle introduction to cross-database issues.
Stop using varchar(n) with PostgreSQL, just use text. There's no point to using size limited string columns in PostgreSQL unless you have a hard requirement that the size must be limited. From the fine manual:
The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. [...] If you desire to store long strings with no specific upper limit, use text...
Tip: There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
So don't bother with traditional char and varchar in PostgreSQL unless you have to, just use text.
Start validating your incoming data to ensure that it doesn't violate any size or format constraints you have.
You can switch your columns from varchar(n) to text immediately and keep working with SQLite while you get PostgreSQL up and running; both databases will be happy with text for strings of unlimited length and this simple fix will get you past your immediate problem. Then, as soon as you can, switch your development environment to PostgreSQL so that you can catch problems like this before your code hits production.
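On the Django side the change is just a field-type swap in whichever model owns the varchar(10) column. A hedged sketch (the model and field names here are hypothetical, since the offending column isn't in the Comment model shown above):
from django.db import models

class Profile(models.Model):
    # Before: CharField(max_length=10) becomes varchar(10) in PostgreSQL,
    # which enforces the limit; SQLite silently ignores it.
    # nickname = models.CharField(max_length=10)

    # After: TextField maps to PostgreSQL's text type, which has no length
    # limit, so "value too long" can no longer occur for this column.
    nickname = models.TextField()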
The reason is that PostgreSQL actually checks the length of the data against the size of the field and errors out if it's too large, whereas SQLite completely ignores the specified field size, and MySQL silently truncates the data, destroying it irretrievably. Make the field larger.

XSS attack - Sanitizing input vs Rejection

Lately, I have become interested in XSS and its prevention methods.
Most XSS prevention techniques focus on sanitizing the input of invalid characters and then using it. This raises a question:
When it is obvious that the purpose is indeed an XSS attack, why do we strip the invalid characters and then go ahead and use the input, instead of rejecting it outright and sending the user to an error page?
I'm sure everyone has thought of this approach, but somehow the focus is on input validation, filtering, and reuse instead of rejection. Why? What am I missing here?
The rule I use: input validation makes sure the data is valid according to the domain. If I expect a number and get a string of letters, I reject it. If, however, I have a text field that can contain anything (like a comment on Stack Overflow), input validation and rejection are virtually impossible, so I need to sanitize/output-encode instead.
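A minimal sketch of those two cases in Python (the function names are just for illustration):
import html

def parse_quantity(raw):
    # Domain-constrained input: reject outright if it isn't a number.
    if not raw.isdigit():
        raise ValueError("quantity must be a number")
    return int(raw)

def render_comment(raw):
    # Free-form input: rejection is impossible (a comment may legitimately
    # contain '<' or '&'), so encode on output instead.
    return "<p>" + html.escape(raw) + "</p>"

print(render_comment("I <3 writing a < b in comments"))
# <p>I &lt;3 writing a &lt; b in comments</p>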
Because usually, when the input is wrong, you already end up showing an error page.
For instance:
page.php?id=a33
"select * from table where id = ".((int)$_GET['id']);
The "num rows" value will be 0 because you are searching for:
"select * from table where id = 0";
Besides, in some cases any string, even a corrupted one, can be useful for what the program is going to do, for instance a search string.
And again, doing nothing can be frustrating for the user; it's better to sanitize and show a warning if some data was lost.

"out of memory" exception in CRecordset when selecting a LONGTEXT column from MySQL

I am using CODBCRecordset (a class found on CodeProject) to find a single record in a table with 39 columns. If no record is found then the call to CRecordset::Open is fine. If a record matches the conditions then I get an Out of Memory exception when CRecordset::Open is called. I am selecting all the columns in the query (if I change the query to select only one of the columns, keeping the same WHERE clause, there is no exception).
I assume this is because of some limitation in CRecordset, but I can't find anything telling me of any limitations. The table only has 39 columns.
Has anyone run into this problem? And if so, do you have a work around / solution?
This is a MFC project using Visual Studio 6.0 if it makes any difference.
Here's the query (formatted here so it would show up without a scrollbar):
SELECT `id`, `member_id`, `member_id_last_four`, `card_number`, `first_name`,
`mi`, `last_name`, `participant_title_id`, `category_id`, `gender`,
`date_of_birth`, `address_line_1`, `address_line_2`, `city`, `state`,
`zip`, `phone`, `work_phone`, `mobile_phone`, `fax`, `email`,
`emergency_name`, `emergency_phone`, `job_title`, `mail_code`,
`comments`, `contract_unit`, `contract_length`, `start_date`,
`end_date`, `head_of_household`, `parent_id`, `added_by`, `im_active`,
`ct_active`, `organization`, `allow_members`, `organization_category_id`,
`modified_date`
FROM `participants`
WHERE `member_id` = '27F7D0982978B470C5CF94B1B833CC93F997EE23'
Copying and pasting into my query browser gives me only one result.
More info:
I commented out each column in the SELECT statement except for id, ran the query, and got no exception.
Then I systematically went through and uncommented each column, one at a time, re-running the query after each one.
When I uncommented the comments column, I got the error.
That column is defined (in MySQL) as: LONGTEXT
Can we assume you mean you're calling CODBCRecordset::Open(), yes? Or more precisely, something like:
CDatabase db;
db.Open(NULL, FALSE, FALSE, "ODBC;", TRUE);
CODBCRecordset rs(&db);
rs.Open("select blah, blah, blah from ...");
EDIT after response:
There are some known bugs with various ODBC drivers that appear to be caused by retrieving invalid field lengths. See these links:
http://forums.microsoft.com/msdn/showpost.aspx?postid=2700779&siteid=1
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=296391
This particular one seems to have been because CRecordset will allocate a buffer big enough to hold the field. As the column returns a length of zero, it's interpreted as the max 32-bit size (~2G) instead of the max 8-bit size (255 bytes). Needless to say, it can't allocate enough memory for the field.
Microsoft has acknowledged this as a problem; have a look at these for solutions:
http://support.microsoft.com/kb/q272951/
http://support.microsoft.com/kb/940895/
EDIT after question addenda:
So, given that your MySQL field is a LONGTEXT, it appears CRecordSet is trying to allocate the max possible size for it (2G). Do you really need 2 gig for a comments field? Typing at 80 wpm (about 6 characters per word, so roughly 480 characters a minute), it would take a typist about eight and a half years to fill that field, working 24 h/day with no rest :-).
It may be a useful exercise to have a look at all the columns in your database to see if they have appropriate data types. I'm not saying that you can't have a 2G column, just that you should be certain that it's necessary, especially in light of the fact that the current ODBC classes won't work with a field that big.
Read Pax's response. It gives you a great understanding of why the problem happens.
Workaround:
This error only happens if the field defined as TEXT, LONGTEXT, etc. is NULL (and perhaps empty). If there is data in the field, then the buffer is allocated only for the actual size of the data, not the max size (it's the max-size allocation that causes the error).
So, if there is a case where you absolutely have to have these large fields, here is a potential solution:
Give the field a default value in the database (e.g. '<blank>').
Then, when displaying the value, pass NULL/empty if you find the default value.
Then, when updating the value, pass the default value if you find NULL/empty.
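The read/write translation itself is trivial; here it is sketched in Python just to show the idea (the sentinel string and helper names are made up):
SENTINEL = "<blank>"  # the default value stored instead of NULL/empty

def from_db(stored):
    # Displaying: treat the sentinel as "no comment".
    return "" if stored == SENTINEL else stored

def to_db(value):
    # Updating: never write NULL/empty; write the sentinel instead, so the
    # driver never reports a zero-length LONGTEXT field.
    return SENTINEL if not value else value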
I second Pax's suggestion that this error is due to trying to allocate a buffer big enough to hold the biggest LONGTEXT possible. The client doesn't know how large the data is until it has fetched it.
LONGTEXT is indeed way larger than you would ever need in most applications. Consider using MEDIUMTEXT (max size 16MB) or just TEXT (max size 64KB) instead.
There are similar problems in PHP database interfaces: PHP normally has a memory size limit, and any fetch of a LONGBLOB or LONGTEXT is likely to exceed that limit.