Get account id from private/secret key - blockchain

I added a way to generate an address from PK58 by possessing the private key, but is there any way to get the human-readable account ID?
For example: input my private and/or secret key and get back account.testnet (if the key belongs to one) instead of a hex-style address.
What are the options to get the account id?

There is no way to recover a named account from the private/secret key alone.
However, you can do it from the public key by using the public read access to the NEAR Explorer database.
You can find the shared access credentials in the NEAR Indexer for Explorer repo.
Once connected, execute this query:
SELECT account_id FROM access_keys WHERE public_key = 'ed25519:5HApjDQKtYQQhWURi2zQ8rRrVfJftkUDKLjyVejhLBwG';
account_id
-----------------------
py2sfxwe5q16p.testnet
Refer to the database structure schema in that repo for the other tables you can query.
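To sketch the lookup programmatically: the Explorer database is PostgreSQL, so in practice you would use psycopg2 plus the read-only credentials from the repo. The runnable stand-in below uses sqlite3 purely to illustrate the same query shape (the table and column names are taken from the query above):

```python
import sqlite3

# Local stand-in for the Explorer database (the real one is PostgreSQL;
# swap sqlite3 for psycopg2 and the read-only credentials published in
# the NEAR Indexer for Explorer repo).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_keys (public_key TEXT, account_id TEXT)")
conn.execute(
    "INSERT INTO access_keys VALUES (?, ?)",
    ("ed25519:5HApjDQKtYQQhWURi2zQ8rRrVfJftkUDKLjyVejhLBwG",
     "py2sfxwe5q16p.testnet"),
)

def account_ids_for_key(public_key):
    # One public key can be added to several accounts, so fetch all matches.
    rows = conn.execute(
        "SELECT account_id FROM access_keys WHERE public_key = ?",
        (public_key,),
    ).fetchall()
    return [r[0] for r in rows]

print(account_ids_for_key(
    "ed25519:5HApjDQKtYQQhWURi2zQ8rRrVfJftkUDKLjyVejhLBwG"))
```

Note that the same query can return multiple rows, since one access key may be attached to several accounts.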

Related

Connect Redshift through Access Keys via SQuirrel SQL

I'm trying to follow this tutorial https://www.cdata.com/kb/tech/awsmanagement-jdbc-squirrel-sql.rst in order to connect to Redshift via SQuirreL SQL, specifically via an Access Key ID and Secret Access Key.
When I get to the Driver properties tab and need to insert the two keys, I can't set the two values:
I click in the Value field, but it simply doesn't let me type anything.
Has anyone hit a similar problem and resolved it?
I found a workaround: use a driver with the SDK (for other versions see https://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#jdbc-previous-versions-with-sdk) and specify the IAM credentials in the connection URL, structured like this:
jdbc:redshift:iam://{cluster-name}:{aws-region}/{db-name}?DbUser={username}&AccessKeyID={access-key-ID}&SecretAccessKey={secret-access-key}&AutoCreate=true
Replace the fields in braces. The final part, &AutoCreate=true, is optional and only needed the first time you connect as {username}, if you want it created as a new user in the DB. For the other fields, refer to https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc-and-odbc-options-for-database-credentials.html.
I figured it out with the help of https://docs.aws.amazon.com/redshift/latest/mgmt/generating-iam-credentials-configure-jdbc-odbc.html at step 3, based on their example: jdbc:redshift:iam://examplecluster:us-west-2/dev?AccessKeyID=AKIAIOSFODNN7EXAMPLE&SecretAccessKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
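If it helps, here is a small, purely illustrative Python helper that assembles the URL from the bracketed fields (the function name and defaults are mine, not part of any AWS SDK):

```python
def redshift_iam_jdbc_url(cluster, region, db, user,
                          access_key_id, secret_access_key,
                          auto_create=False):
    # Builds the IAM connection URL in the shape the Redshift
    # driver-with-SDK expects; parameter names follow the AWS docs above.
    url = (
        f"jdbc:redshift:iam://{cluster}:{region}/{db}"
        f"?DbUser={user}&AccessKeyID={access_key_id}"
        f"&SecretAccessKey={secret_access_key}"
    )
    if auto_create:
        # Only needed the first time, to create the DB user on the fly.
        url += "&AutoCreate=true"
    return url

# Placeholder keys from AWS's own documentation, never real ones:
print(redshift_iam_jdbc_url("examplecluster", "us-west-2", "dev",
                            "dbuser", "AKIAIOSFODNN7EXAMPLE", "EXAMPLEKEY"))
```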

Deterministic encryption using AWS KMS

I need to build an identity service that uses a customer supplied key to encrypt sensitive ID values for storage in RDS but also has to allow us to look up a record later using the plaintext ID. We'd like to use a simple deterministic encryption algorithm for this but it looks like KMS API doesn't allow you to specify the IV so you can never get identical plaintext to encrypt to the same value twice.
We also have the requirement to look up the data using another non-secure value and retrieve the encrypted secure value and decrypt it - so one-way hashing is unfortunately not going to work.
Taken together, this means we can't look up the secure ID without brute-force iterating through all records, decrypting each one, and comparing it to the plaintext value. What we wanted instead was to encrypt the plaintext search value with a known IV and use that ciphertext as an index to look up the matching record in the database.
I'm guessing this is a pretty common requirement for things like SSN's and such so how do people solve for it?
Thanks in advance.
look up a record later using the plaintext ID
Then you are losing quite a bit of security. Maybe you could store a hash (e.g. SHA-256) of the ID alongside the encrypted data, which would make it easy to look up the record without being able to reverse the value.
This approach assumes that the ID comes from a reasonably large message space (there are many possible IDs), so it is not feasible to precompute a map of every possible value.
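A minimal sketch of that hash-alongside-ciphertext idea, using plain SHA-256; the salt parameter is my addition, not part of the suggestion above:

```python
import hashlib

def lookup_hash(plaintext_id: str, salt: bytes = b"per-application-salt") -> str:
    # SHA-256 of the (salted) ID: deterministic, so it can be stored in an
    # indexed column and queried by equality, but not feasibly reversed
    # when the ID space is large.
    return hashlib.sha256(salt + plaintext_id.encode()).hexdigest()

# The same input always maps to the same digest, which is what makes
# an equality lookup possible:
assert lookup_hash("123-45-6789") == lookup_hash("123-45-6789")
print(lookup_hash("123-45-6789"))
```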
KMS API doesn't allow you to specify the IV so you can never get identical plaintext to encrypt to the same value twice.
Yes, KMS provides its own IV for each ciphertext, enforcing good security practice.
If I understand your use case correctly, your flow is like this:
The customer provides a key K, and you use this key to encrypt a secret S, which is stored in RDS with an associated ID.
Given a non-secret lookup value, you want to be able to find S and decrypt it.
If the customer is reusing the key, this is actually not all that hard to accomplish.
Create a KMS key for the customer.
Use this KMS key to encrypt the customer's IV and the key the customer supplied, and store them in AWS Secrets Manager, preferably namespaced in some way per customer. A JSON structure like this:
{
  "iv": "somerandomivvalue",
  "key": "somerandomkey"
}
would allow you to easily parse the values out. ASM also allows you to seamlessly perform key rotation - which is really nifty.
If you're paranoid, you could take a cryptographic hash of the customer name (or whatever) and namespace by that.
RDS now stores the numeric ID of the customer, the insecure values, and a namespace value (or some method of deriving the location) in ASM.
It goes without saying that you need to limit access to the secrets manager vault.
To employ the solution:
Customer issues request to read secure value.
Service accesses ASM and decrypts the secret for customer.
Service extracts IV and key
Service initialises cipher scheme with IV and key and decrypts customer data.
Benefits: You encrypt and decrypt the secret values in ASM with a KMS key under your full control, and you can store and recover whatever state you need to decrypt the customer values in a secure manner.
Others will probably have cryptographically better solutions, but this should do for a first attempt.
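For example, once the service has fetched the secret string from ASM (the boto3 get_secret_value call is omitted so this sketch stays self-contained), extracting the IV and key is just JSON parsing. Base64 encoding of the raw bytes is an assumption of this sketch, not something ASM enforces:

```python
import base64
import json

# What get_secret_value(...)["SecretString"] might return for one customer,
# following the {"iv": ..., "key": ...} structure above.
secret_string = json.dumps({
    "iv": base64.b64encode(b"\x00" * 12).decode(),   # 12-byte IV (e.g. for GCM)
    "key": base64.b64encode(b"\x01" * 32).decode(),  # 256-bit key
})

secret = json.loads(secret_string)
iv = base64.b64decode(secret["iv"])
key = base64.b64decode(secret["key"])
# iv and key can now initialise whatever cipher scheme the service uses
# (e.g. AES-GCM via the `cryptography` package) to decrypt the customer data.
print(len(iv), len(key))
```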
In the end we decided to continue to use KMS for the customer-supplied-key encrypt/decrypt of the sensitive ID column, but also enabled the PostgreSQL pgcrypto extension to provide secure hashes for lookups. So in addition to our encrypted column we added an id_hash column, and we operate on the table something like this:
INSERT INTO employee VALUES (..., ENCODE(HMAC('SENSITIVE_ID+SECRET_SALT', 'SECRET_PASSPHRASE', 'sha256'), 'hex'));
SELECT * FROM employee WHERE division_id = ??? AND id_hash = ENCODE(HMAC('SENSITIVE_ID+SECRET_SALT', 'SECRET_PASSPHRASE', 'sha256'), 'hex');
We could have done the hashing client-side but since the algorithm is key to later lookups we liked the simplicity of having the DB do the hashing for us.
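For reference, the client-side equivalent of that hash is a few lines of Python, mirroring pgcrypto's HMAC(data, key, type) argument order; the salt and passphrase values here are placeholders:

```python
import hashlib
import hmac

def id_hash(sensitive_id: str, salt: str, passphrase: str) -> str:
    # Mirrors ENCODE(HMAC('SENSITIVE_ID+SECRET_SALT', 'SECRET_PASSPHRASE',
    # 'sha256'), 'hex') from the SQL above: HMAC-SHA256 over the salted ID,
    # keyed with the passphrase, hex-encoded.
    message = (sensitive_id + salt).encode()
    return hmac.new(passphrase.encode(), message, hashlib.sha256).hexdigest()

# Deterministic, so it can serve as the id_hash lookup column:
print(id_hash("123-45-6789", "+SECRET_SALT", "SECRET_PASSPHRASE"))
```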
Hope this is of use to anyone else looking for a solution.

access credentials error in Copy Command in S3

I am facing an access credentials error when I run the COPY command to load from S3.
my copy command is :
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=D93vB$;yYq'
csv;
error message is:
error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secret_access_key=...[;token=...]'
'aws_access_key_id=?;
aws_secret_access_key=?''
Could anyone please explain what aws_access_key_id and aws_secret_access_key are?
Where can we find them?
Thanks in advance.
Mani
The access key you're using looks more like a secret key; access key IDs usually look something like "AKIAXXXXXXXXXXX".
Also, don't post keys openly in Stack Overflow questions. If someone gets hold of a set of access keys, they can access your AWS environment.
Access Key & Secret Key are the most basic form of credentials / authentication used in AWS. One is useless without the other, so if you've lost one of the two, you'll need to regenerate a set of keys.
To do this, go into the AWS console, go to the IAM services (Identity and Access Management) and go into users. Here, select the user that you're currently using (probably yourself) and go to the Security Credentials tab.
Here, under Access keys, you can see which sets of keys are currently active for this user. You can only have 2 sets active at one time, so if there's already 2 sets present, delete one and create a new pair. You can download the new pair as a file called "credentials.csv" and this will contain your user, access key and secret key.
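Once you have a valid pair, the COPY command from the question would take the exact shape the error message describes. Sketched here with the placeholder keys from AWS's own documentation, never real ones:

```python
access_key_id = "AKIAIOSFODNN7EXAMPLE"                       # placeholder
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # placeholder

# Both keys go in one quoted string, separated by a semicolon, exactly
# as the error message demands.
copy_sql = (
    "copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'\n"
    f"credentials 'aws_access_key_id={access_key_id};"
    f"aws_secret_access_key={secret_access_key}'\n"
    "csv;"
)
print(copy_sql)
```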

What can you do with AWS_ACCESS_KEY_ID without also having AWS_SECRET_ACCESS_KEY?

All the documentation about AWS keys tells you to have both the key ID and the secret key. Are there any practical uses for having only the key ID without the secret key? If not, why aren't the two combined into one slightly more manageable single setting?
It seems to me that if you must ask the user to produce the secret, you might just as well ask for their key ID in the same step.
More general: https://en.wikipedia.org/wiki/Public-key_cryptography
All Amazon APIs only work with the access key plus a signature. The signature is how you prove you also have the secret key; the secret key never goes over the wire.
If you "combined" them into one key, the service would not know which account the request is for. You would also have to send the secret key over the wire, which, in general, is a very bad thing.
So, basically, the public (access) key serves as an account selector, and the private (secret) key serves to prove you actually have access to the account.
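A toy model of that split (nothing like the real Signature Version 4 algorithm, just the idea): the access key ID travels in the clear to select the account, while the secret key only ever keys an HMAC over the request, which the server recomputes from its own copy:

```python
import hashlib
import hmac

# Server side: access key ID -> secret key (the "account selector" table).
ACCOUNTS = {"AKIAEXAMPLE": b"server-side-copy-of-secret"}

def sign(access_key_id, secret_key, request_body):
    # Client sends the ID and the signature; the secret never leaves the client.
    sig = hmac.new(secret_key, request_body, hashlib.sha256).hexdigest()
    return {"key_id": access_key_id, "body": request_body, "signature": sig}

def verify(request):
    secret = ACCOUNTS[request["key_id"]]  # select the account by public ID
    expected = hmac.new(secret, request["body"], hashlib.sha256).hexdigest()
    # Matching signatures prove the caller possesses the secret key.
    return hmac.compare_digest(expected, request["signature"])

req = sign("AKIAEXAMPLE", b"server-side-copy-of-secret", b"GET /bucket/key")
print(verify(req))  # True
```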

boto - What exactly is a key?

As the title says, what is a key in boto?
What does it encapsulate (fields, data structures, methods, etc.)?
How does one access the file contents of files in an AWS bucket using a key/boto?
I was not able to find this information in the official documentation or on any third-party website. Could anybody provide it?
Here are some examples of the usage of the key object:
def download_file(key_name, storage):
    key = bucket.get_key(key_name)
    try:
        storage.append(key.get_contents_as_string())
    except:
        print "Some error message."
and:
for key in keys_to_process:
    pool.spawn_n(download_file, key.key, file_contents)
pool.waitall()
In your code example, key is the object reference to a unique identifier within a bucket.
Think of a bucket as a table in a database,
and of keys as the rows in that table;
you reference a key (better known as an object) in the bucket.
In boto (not boto3) it often works like this:
from boto.s3.connection import S3Connection
connection = S3Connection()  # assumes you have a .boto or boto.cfg set up
bucket = connection.get_bucket('my_bucket_name_here')  # like the table name in SQL: SELECT OBJECT FROM TABLENAME
key = bucket.get_key('my_key_name_here')  # the OBJECT in the SQL analogy above
Key names are strings, and there is a convention that if you put a '/' in the name, a viewer/tool should treat it like a path/folder for the user. For example, my/object_name/is_this is really just a key inside the bucket, but most viewers will show a my folder containing an object_name folder, and then what looks like a file called is_this, purely by UI convention.
Since you appear to be talking about Simple Storage Service (S3), you'll find that information on Page 1 of the S3 documentation.
Each object is stored and retrieved using a unique developer-assigned key.
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. Because the combination of a bucket, key, and version ID uniquely identify each object, Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and "2006-03-01/AmazonS3.wsdl" is the key.
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html
The key is just a string -- the "path and filename" of the object in the bucket, without a leading /.
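Rebuilding the documentation's example URL from its parts makes the "bucket + key" addressing concrete:

```python
# Recreating the documentation example: endpoint + bucket + key
# together address one object.
endpoint = "s3.amazonaws.com"
bucket = "doc"
key = "2006-03-01/AmazonS3.wsdl"  # the key: "path and filename", no leading /

url = f"http://{bucket}.{endpoint}/{key}"
print(url)  # http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl
```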