Safety of storing mathematically manipulated password - password-encryption

I want to find a safe way to authenticate a user without storing the password. To do that, I was thinking of transforming the password and storing the transformed variant inside the user class.
Is this a safe way of doing so? If not, what is the reason?
Let's say our average Joe is creating an account
his username is "joe" and password "YEe3t"
I will use the ASCII table to simplify the example
YEe3t = (89, 69, 101, 51, 116)
My transformation function would do something similar to:
(1*89 mod 97) + (2*69) + (3*101 mod 97) + (4*51) + (5*116 mod 97) = 538
Since YEe3t is 5 characters long, it would be stored as 5538 (the length followed by the transformed value).
When logging in, if the length matches and the input in the password field, run through the same function, also matches, the login is successful.
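As an illustration only (rolling your own scheme like this is not secure), the transformation described above can be sketched in Python; applying mod 97 at the odd positions is taken directly from the worked example:

```python
def transform(password):
    """Length-prefixed checksum as described in the question (NOT secure)."""
    total = 0
    for i, ch in enumerate(password, start=1):
        term = i * ord(ch)
        if i % 2 == 1:          # odd positions are reduced mod 97 in the example
            term %= 97
        total += term
    return int(f"{len(password)}{total}")

print(transform("YEe3t"))  # → 5538
```

Note how small the output space is: every 5-character password maps into a few thousand values, so many different passwords collide onto the same stored number and any colliding input would be accepted.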

I really like how you came up with hash functions intuitively. Great work!
However, please don't try to come up with an implementation by yourself - you're welcome to read our FAQ for developers on the best ways to hash/store passwords.
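For contrast, here is a minimal sketch of the kind of approach such FAQs describe, using Python's standard hashlib (PBKDF2 with a random salt and a constant-time comparison; the iteration count shown is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted hash; store the salt and digest, never the password."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("YEe3t")
print(verify_password("YEe3t", salt, digest))  # True
print(verify_password("guess", salt, digest))  # False
```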

Related

Use list to generate lowercase letter and number in python

I am trying to use a list such as:
[(1, 4), (2, 2)]
to get:
[('a', 4), ('b', 2)]
I am trying to use string.ascii_lowercase. How would I accomplish this in Python 3? This is for a coding challenge, so I'm after the fewest characters possible.
Thanks for any help!
I'm not going to solve it for you, but I'd suggest you look into the chr and ord functions. Note that the ASCII code for "a" is 97, so to convert 1 to "a" you would have to add 96 to it.
I would suggest you follow the suggestion given by B.Eckles above, as you would learn better and probably find a shorter (character-wise) solution.
However, if you want to stick with using string.ascii_lowercase, the code snippet below could be useful to start from:
import string

a = [(1, 4), (2, 2)]
b = []
for (first, second) in a:
    b.append(
        (string.ascii_lowercase[(first - 1) % len(string.ascii_lowercase)],
         second))
print(b)
In this case, the printed solution would be:
[('a', 4), ('b', 2)]
I have inserted the modulo (i.e. % len(string.ascii_lowercase)) to avoid out-of-bounds accesses. Just be careful that the value 0 would produce 'z' this way.
Hope it helps!
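For comparison, the chr/ord route hinted at in the first answer is shorter; this sketch assumes the first element of each tuple stays in the range 1–26:

```python
a = [(1, 4), (2, 2)]
# 1 -> 'a', because ord('a') == 97, so add 96 to each number
b = [(chr(n + 96), count) for (n, count) in a]
print(b)  # [('a', 4), ('b', 2)]
```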

Compare duplicates for 4 fields in open SQL

I want to compare if there are duplicates across 4 fields in open SQL.
Scenario: User has 4 fields to input. First name (N1), last name (N2), additional first name (N3) and additional last name (N4).
Right now the algorithm works this way: it concatenates N1 + N2 + % and also
N2 + N1 + %. So whatever the user inputs in those fields, the query looks for N1N2% or N2N1%. This means that for 2 fields there are 2! combinations to check. With 2 additional fields this algorithm explodes, as there will be 4! combinations to check. Any ideas how to tackle this?
Note: We do this kind of combination check because the user could input data in any of the given input fields, so we check all combinations of fields. Unfortunately, this cannot be changed.
EDIT:
I cannot assume the order as it was previously designed in such a way. Hence, the complications with combinations.
Edit2:
I like the idea of checking individual parts. But what we want to do is ideally concatenate all strings together and check for a substring in the DB. In open SQL this is done using the LIKE statement. Our DB table already stores such a concatenated string for the N1+N2 combination. This needs to be extended to 4 fields now.
The key to your problem is checking all name parts individually, each with leading and trailing '%', and checking the total length of the DB entry against the summed length of the name parts:
field LIKE ('%' + N1 + '%') AND field LIKE ('%' + N2 + '%') AND field LIKE ('%' + N3 + '%') AND field LIKE ('%' + N4 + '%') AND LENGTH(field) = LENGTH(N1 + N2 + N3 + N4)
This will find a match. You could use it to SELECT a normalized concatenation of the names and use GROUP BY and HAVING count(*)>1 to search for duplicates.
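The idea can be sketched outside SQL (plain Python here, with hypothetical inputs); note that the length check rules out partial matches but can still be fooled by overlapping substrings, so it is a heuristic, not a proof of a permutation:

```python
def looks_like_same_name(stored, parts):
    """stored: the concatenated, normalized name string in the DB row.
    parts: the non-empty input fields (N1..N4), already normalized."""
    parts = [p for p in parts if p]
    # every part must occur as a substring, and the lengths must add up
    return (all(p in stored for p in parts)
            and len(stored) == sum(len(p) for p in parts))

print(looks_like_same_name("DOEJOHN", ["JOHN", "DOE"]))     # True
print(looks_like_same_name("DOEJOHNSON", ["JOHN", "DOE"]))  # False
```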
If the user does not care about the order and you want to check for duplicates then the following condition seems to meet your criteria I think.
SELECT ...
  FROM ...
  INTO TABLE ...
  WHERE N1 IN (#INPUT_N1, #INPUT_N2, #INPUT_N3, #INPUT_N4)
    AND N2 IN (#INPUT_N1, #INPUT_N2, #INPUT_N3, #INPUT_N4)
    AND N3 IN (#INPUT_N1, #INPUT_N2, #INPUT_N3, #INPUT_N4)
    AND N4 IN (#INPUT_N1, #INPUT_N2, #INPUT_N3, #INPUT_N4).

IF sy-dbcnt > 0.
  "duplicates found, do something...
ENDIF.
Of course, when there is garbage in the database, for example a row where all four fields are the same, this can report a match that is not a real duplicate.
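That loophole can be closed by comparing the fields as multisets rather than with four independent IN conditions; a sketch in Python of the equivalent check:

```python
def fields_match(db_row, inputs):
    """Order-independent comparison of the four name fields.
    Sorting compares them as multisets, so a row of four identical
    values no longer matches four distinct inputs."""
    return sorted(db_row) == sorted(inputs)

print(fields_match(("ANNA", "LEE", "MARIE", "KIM"),
                   ("KIM", "ANNA", "MARIE", "LEE")))   # True
print(fields_match(("ANNA", "ANNA", "ANNA", "ANNA"),
                   ("ANNA", "LEE", "MARIE", "KIM")))   # False
```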

Validate phone number with Symfony

I want to validate the phone number in a form. I would like to allow only digits and the "(" and ")" characters, so the user can fill in +31(0)600000000. The +31 is already preset in the form. Digits only are possible with the code below, but how do I also allow those two characters?
Or is there a standard, better way to validate phone numbers?
@Assert\Length(min = 8, max = 20, minMessage = "min_lenght", maxMessage = "max_lenght")
@Assert\Regex(pattern="/^[0-9]*$/", message="number_only")
If you need a good and robust validator for phone numbers, with advanced validation options, I would advise using Google's libphonenumber library, https://github.com/googlei18n/libphonenumber. There is an existing Symfony2 bundle, https://github.com/misd-service-development/phone-number-bundle, and as you can see it provides an assert annotation:
use Misd\PhoneNumberBundle\Validator\Constraints\PhoneNumber as AssertPhoneNumber;
/**
* @AssertPhoneNumber
*/
private $phoneNumber;
The regex you need is:
/^\(0\)[0-9]*$/
or, for the entire number:
/^\+31\(0\)[0-9]*$/
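A quick sanity check of the full-number pattern (shown here in Python's re syntax, which omits the surrounding / delimiters PHP uses):

```python
import re

pattern = re.compile(r"^\+31\(0\)[0-9]*$")
print(bool(pattern.match("+31(0)600000000")))  # True
print(bool(pattern.match("+31-600000000")))    # False
```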
You can test and play around with your regex here (it also includes auto-generated explanations):
https://www.regex101.com/r/gD0hE5/1

Best way to compare phone numbers using Regex

I have two databases that store phone numbers. The first one stores them with a country code in the format 15555555555 (a US number), and the other can store them in many different formats (ex. (555) 555-5555, 5555555555, 555-555-5555, 555-5555, etc.). When a phone number unsubscribes in one database, I need to unsubscribe all references to it in the other database.
What is the best way to find all instances of phone numbers in the second database that match the number in the first database? I'm using the entity framework. My code right now looks like this:
using (FusionEntities db = new FusionEntities())
{
    var communications = db.Communications.Where(x => x.ValueType == 105);
    foreach (var com in communications)
    {
        string sRegexCompare = Regex.Replace(com.Value, "[^0-9]", "");
        if (sMobileNumber.Contains(sRegexCompare) && sRegexCompare.Length > 6)
        {
            var contact = db.Contacts.Where(x => x.ContactID == com.ContactID).FirstOrDefault();
            contact.SMSOptOutDate = DateTime.Now;
        }
    }
}
Right now, my comparison checks to see if the first database contains at least 7 digits from the second database after all non-numeric characters are removed.
Ideally, I want to be able to apply the regex formatting to the point in the code where I get the data from the database. Initially I tried this, but I can't use replace in a LINQ query:
var communications = db.Communications.Where(x => x.ValueType == 105 && sMobileNumber.Contains(Regex.Replace(x.Value, "[^0-9]", "")));
Comparing phone numbers is a bit beyond what regex is designed for. As you've discovered, there are many ways to represent a phone number, with and without things like area codes and formatting. Regex is for pattern matching, so stripping out all formatting with a regex and then comparing strings is doable, but it pushes comparison logic into regex, which is not what it's for.
I would suggest the first and biggest thing to do is sort out the representation of phone numbers. Since you have database access you might want to look at creating a new field or table to represent a phone number object. Then put your comparison logic in the model.
Yes it's more work but it keeps the code more understandable going forward and helps cleanup crap data.
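The normalize-then-compare model could look like this (a Python sketch of the idea only; in the C# codebase the same logic would live on the phone-number model, and the rule for stripping a leading US country code is an assumption based on the question):

```python
import re

def normalize_us(number):
    """Reduce any formatting to bare digits; drop a leading US country code."""
    digits = re.sub(r"\D", "", number)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

print(normalize_us("15555555555"))     # 5555555555
print(normalize_us("(555) 555-5555"))  # 5555555555
print(normalize_us("555-555-5555"))    # 5555555555
```

Storing this normalized form in a new field once, at write time, turns every later lookup into a plain indexed equality comparison instead of a per-row regex pass.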

Counting unique login using Map Reduce

Let's say I have a very big log file in this kind of format (based on where a user logged in):
UserId1 , New York
UserId1 , New Jersey
UserId2 , Oklahoma
UserId3 , Washington DC
....
userId999999999, London
Note that UserId1 logged in from New York first and then flew to New Jersey and logged in again from there.
If I need to get how many unique users logged in (meaning 2 logins with the same userid count as 1 user), how should I map and reduce it?
My initial plan is that I want to map it first to this kind of format :
UserId1, 1
UserId1, 1
UserId2, 1
UserId3, 1
And then reduce it to
UserId1, 2
UserId2, 1
UserId3, 1
But wouldn't this still produce a large output (especially if the common behaviour is for a user to log in only 1 or 2 times a day)? Or is there a better way to implement this?
Do map-reduce.
For example, say you have 10000 lines of data but can only process 1000 lines at a time.
Then process the data 1000 lines at a time, 10 times, deduplicating the user ids within each chunk.
If the combined output of the 10 partial results is still larger than 1000 lines:
do the above step again on it.
else:
deduplicate it directly with a set.
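A sketch of that chunked deduplication in Python (the chunk size and the comma-separated line format are assumptions taken from the question):

```python
def unique_users_chunked(lines, chunk_size=1000):
    """Deduplicate user ids chunk by chunk, then merge the partial results."""
    partial = set()
    for i in range(0, len(lines), chunk_size):
        chunk = lines[i:i + chunk_size]
        # within each chunk, keep only the distinct user ids
        partial |= {line.split(",")[0].strip() for line in chunk}
    return len(partial)

logs = ["UserId1 , New York", "UserId1 , New Jersey", "UserId2 , Oklahoma"]
print(unique_users_chunked(logs, chunk_size=2))  # 2
```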
I recommend making use of a custom key in the map phase. You can refer to the tutorial here for writing and using custom keys. The custom key should have two parts: 1) the user id and 2) the place id. So essentially, in the mapper phase you are doing this:
emit(<userid, place>, 1)
In the reduce phase, you just have to access the key and emit the two parts of the key separately.
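The simpler per-user plan from the question can be sketched in plain Python (stand-ins for the mapper and reducer; a real Hadoop or Spark job would shuffle by key between the two phases):

```python
from collections import defaultdict

def map_phase(lines):
    # emit one (userid, 1) pair per log line
    for line in lines:
        user, _, _place = line.partition(",")
        yield (user.strip(), 1)

def reduce_phase(pairs):
    # sum the 1s per user; the number of distinct keys is the unique-user count
    counts = defaultdict(int)
    for user, one in pairs:
        counts[user] += one
    return counts

logs = ["UserId1 , New York", "UserId1 , New Jersey", "UserId2 , Oklahoma"]
per_user = reduce_phase(map_phase(logs))
print(len(per_user))        # 2 unique users
print(per_user["UserId1"])  # 2 logins
```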