I am reading the documentation regarding Stripe verification for managed accounts, and I am wondering whether it is a good idea to also store the verification details (as a backup) in a private place my application has access to (like a private S3 bucket or a private server)?
I'd recommend against saving any sensitive info on your end, such as their SSN, a copy of their government ID or their bank account details. The best solution here is to send the details to Stripe directly, as they will store them on their end and not log any of it beyond tracking that you provided that information.
You then listen for account.updated events on the Connect webhook endpoint set up for your platform. This will tell you whether Stripe needs more information from that user (via fields_needed) and by when you have to provide the required details (via verification[due_by]).
You can also use properties like legal_entity[ssn_last_4_provided] to know whether you've already sent that information to Stripe or whether they might still need it. This is covered in the Connect documentation.
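For illustration, here is a rough sketch of such a webhook handler in Node/TypeScript using the official stripe and express packages. The endpoint path and environment variable names are placeholders, and the verification[fields_needed] / verification[due_by] fields follow the legacy managed-accounts API this question is about (newer API versions expose a requirements hash instead):

import express from "express";
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const app = express();

// The raw body is needed so the webhook signature can be verified.
app.post("/webhooks/stripe-connect", express.raw({ type: "application/json" }), (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      process.env.STRIPE_CONNECT_WEBHOOK_SECRET! // placeholder: signing secret of the Connect endpoint
    );
  } catch {
    res.status(400).send("Invalid signature");
    return;
  }

  if (event.type === "account.updated") {
    const account = event.data.object as any; // legacy fields are not in the current typings
    const fieldsNeeded: string[] = account.verification?.fields_needed ?? [];
    const dueBy = account.verification?.due_by;

    if (fieldsNeeded.length > 0) {
      // Collect the missing details from the user before dueBy and send them
      // straight to Stripe (e.g. via an account update call); don't store them yourself.
      console.log(`Account ${account.id} still needs ${fieldsNeeded.join(", ")} by ${dueBy}`);
    }
  }

  res.sendStatus(200);
});

app.listen(3000);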
What I am trying to do:
Building a small app that allows a user to purchase a service for a set amount of tokens. For example, 100 tokens for service A, 500 tokens for service B. This will be for a custom token on the Harmony blockchain.
What I know:
I already know how to connect to MetaMask and get the user's address. Signer and provider are available to me.
What confuses me:
Examples and documentation all refer to a private_key and creating a wallet. I don't need to do that; I need to use the user's existing wallet.
What I need to do:
Prompt a transaction in the user's wallet (Harmony ONE or MetaMask) for a set amount of tokens.
Check if the user has the required balance (seems trivial, knowing I can read their balance).
Make the transaction. Also seems ok after reading the docs.
Get a receipt, then call a callback/my code. Again, seems ok after reading the docs.
All pretty straightforward, but again - every document I read refers to setting a private key - surely I don't need to do this?
A transaction always needs to be signed by the private key that the sender address is derived from; otherwise it is rejected by the network.
Examples and documentation all refer to a private_key and creating a wallet. I don't need to do that; I need to use the user's existing wallet.
every document I read refers to setting a private key - surely I don't need to do this?
A backend approach is to import the private key into the app and use it to sign the transaction.
However, there is also a frontend approach: send a request to a wallet browser extension to sign the transaction and broadcast it to the network. The wallet extension then pops up a window and lets the user choose whether they want to sign the transaction (with their private key, which is never shared with the app) and broadcast it - or not.
You can find an example of such a request on the MetaMask docs page.
An advantage of this approach is that your app doesn't need to ask for the user's private key. A disadvantage is that if the user hasn't installed a browser wallet compatible with your app, they can't send the transaction (or at least not as easily).
Note: I'm not familiar with the Harmony wallet, but I'm assuming it works in a similar way as MetaMask - because Harmony is an EVM-compatible network, and MetaMask only supports EVM-compatible networks.
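To make this concrete, here is a minimal sketch of the frontend approach with ethers.js (v6) and a generic ERC-20-style token contract. The token address, receiving address, decimals and price are placeholders you would replace for your HRC-20 token on Harmony:

import { BrowserProvider, Contract, parseUnits } from "ethers";

// Only the two functions we need from the token contract.
const ERC20_ABI = [
  "function balanceOf(address owner) view returns (uint256)",
  "function transfer(address to, uint256 amount) returns (bool)",
];

const TOKEN_ADDRESS = "0xYourTokenAddress";    // placeholder
const SHOP_ADDRESS = "0xYourReceivingAddress"; // placeholder
const TOKEN_DECIMALS = 18;                     // assumed

export async function payForService(priceInTokens: string): Promise<void> {
  // window.ethereum is injected by MetaMask (or a compatible wallet extension).
  const provider = new BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner(); // prompts the wallet to connect if needed
  const token = new Contract(TOKEN_ADDRESS, ERC20_ABI, signer);

  const amount = parseUnits(priceInTokens, TOKEN_DECIMALS);

  // 1. Check the user has the required balance.
  const balance: bigint = await token.balanceOf(await signer.getAddress());
  if (balance < amount) throw new Error("Insufficient token balance");

  // 2. Ask the wallet to sign and broadcast the transfer. The user's private key
  //    never leaves the extension; they just approve (or reject) the popup.
  const tx = await token.transfer(SHOP_ADDRESS, amount);

  // 3. Wait for the receipt, then run your follow-up logic.
  const receipt = await tx.wait();
  console.log("Confirmed in block", receipt?.blockNumber);
}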
I'm using the endpoint /api/v1/Owners/{ownerAccountId}/BoundLocks to get the BoundLocks belonging to a certain account that previously granted my application access.
The issue is, every account has a different ownerAccountId. How can I get the one associated with a particular locking system?
I think what you are trying to achieve is:
You are using the OAuth2 authorization code flow, so users grant your service access to their Tapkey owner account (locking system).
If this is not the correct assumption, the answer might not apply; please update your question with more details and we will provide a matching answer.
You can get all owner accounts of such a user with the GET /api/v1/Owners endpoint.
For more information about this endpoint, visit https://developers.tapkey.io/openapi/tapkey_management_api_v1/#/Owners
In general, all publicly available endpoints are listed and described in detail here:
https://developers.tapkey.io/openapi/tapkey_management_api_v1/
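As a rough sketch (TypeScript with fetch; the base URL and the response shape are assumptions, so double-check them against the API reference above):

const BASE_URL = "https://my.tapkey.com/api/v1"; // assumed base URL

// List the owner accounts the authenticated user has granted you access to.
async function getOwnerAccountIds(accessToken: string): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/Owners`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Tapkey API error: ${res.status}`);
  const owners: Array<{ id: string }> = await res.json();
  return owners.map((owner) => owner.id);
}

// Then query the BoundLocks of one of those owner accounts.
async function getBoundLocks(accessToken: string, ownerAccountId: string) {
  const res = await fetch(`${BASE_URL}/Owners/${ownerAccountId}/BoundLocks`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Tapkey API error: ${res.status}`);
  return res.json();
}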
I'm currently trying to make an account signup page for a small project I'm working on and I don't know how to send data back to the server (I'm using the Flask framework) without also allowing everyone to send data. Let's say that I've set up an API endpoint on /createAccount. I can then send POST requests to that endpoint: {"username": "test", "password": "test"}. The web server will then handle that request by inserting that data into a database and responding with 201. The problem is, anybody would be able to send these requests, and I only want users to be able to register through the login page, and not by making an API call. Is there any way of doing this?
Edit: I've given this problem a bit more thought and I think that the only API that is difficult to secure is the signup API. When a user has created an account, I can just assign them an API key, which they will send to the API every time they want to make a request, which means that an account is required to make API calls. If a certain key is making too many requests, it can be rate limited or temporarily banned from making further requests. The problem with the signup API, however, is that there is no information by which a request sender could be identified. I could use the IP address, but that can be changed and wouldn't really help if multiple IPs are spamming the API at the same time. Is there a way I can identify non-registered users?
Short answer: no.
You have to validate the data to make sure the account being created is legitimate and not trash data meant to fill your database or serve some other malicious intent.
This is the reason you usually have to confirm an account by clicking a confirmation link sent to your email: this way the app is sure that your account is legit.
You could also check info on the front end, but that is never as secure as back end checking, because of your concern in the question: in the end, anyone who gets to know your endpoints could potentially send direct requests to your server with whatever data they wanted.
Assuming you have a trusted source of registrations, and if that source can make an SSH connection to the server where your Flask app is running, an alternative to trying to lock down a registration API is to provide a command-line script to do the registration.
The trusted source does something like
ssh someuser@youripaddress /path/to/register.py "username" "password" "other info"
If you use a Flask custom command, you can share model definitions and db configuration with the rest of the app.
Trying to understand how to use Cognito and API Gateway to secure an API.
Here is what I understand so far from AWS documentation and the Cognito user interface:
Clients
www-public - public facing website
www-admin - administrators website
Resource Servers
Prices - for this simple example the API will provide secured access to this resource.
Scopes
prices.read
prices.write
Again, very simple permissions on the API. Public www users can read prices, administrators can write them.
API Gateway
GET /prices - accessible to authenticated users that can read prices.
POST /prices - only accessible to administrators
Users
Administrators - can update prices via the POST method.
Non-administrators - cannot update prices.
Based on this...
Each client will request the scopes it is interested in. So for the public www site it will request prices.read and for the administration site both prices.read and prices.write.
The API Gateway will use two Cognito Authorisers, one for each HTTP Verb. So the GET method must check the user can read prices and the POST method that they can write prices.
The bit I don't see is how to put all of this together. I can make the clients request scopes, but how do they then connect to user permissions?
When the token is generated, where is the functionality that says "Ok, you requested these scopes, now I'm going to check if this user has this permission and give you the right token?"
I understand that scopes ultimately relate to the claims that will be returned in the token. For example, requesting the profile scope means that the token will contain certain claims, e.g. email, surname, etc.
I think, based on this, that my permissions will ultimately end up being claims that are returned when specific scopes are asked for. The fact that the two clients differ in what they request means that the prices.write claim can never be returned to the public www client. It would never be issued a token if the prices.write scope was requested.
What I can't see is where this fits in Cognito. There is the option to put users into groups but that is pretty much it. Likewise, there is nothing (that I could see) to relate scopes to claims.
I'm coming from a .Net and Identity Server background. Certainly in the last version of Identity Server I looked at there was a handler method where you would work out which claims to put into a token. I guess this would map into one of the custom handler lambda functions in Cognito. From there this would need to query Cognito and work out what claims to issue?
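I'm guessing something along these lines (Node.js/TypeScript; the group name and custom claim are placeholders, and as far as I can tell the classic Pre Token Generation trigger only customizes the ID token, not the access token):

export const handler = async (event: any) => {
  // Groups the user belongs to, as supplied by Cognito in the trigger event.
  const groups: string[] = event.request?.groupConfiguration?.groupsToOverride ?? [];

  // Turn group membership into a custom claim the API can check later.
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        "custom:canWritePrices": groups.includes("Administrators") ? "true" : "false",
      },
    },
  };

  return event;
};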
The final piece of the puzzle is how the API Gateway checks the claims. Can this be done in API Gateway or does the token need to be inspected in the Lambda function I will write to handle the API Gateway request?
Certainly using Identity Server and .Net there was a client library you would use in the API to inspect the claims and enforce permissions accordingly. Guessing there is something similar in a Node JS Lambda function?
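Something like this is what I have in mind for the Lambda behind API Gateway (TypeScript; the scope and claim names are guesses based on the setup above, and I'm assuming a Cognito user pool authorizer with Lambda proxy integration):

import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Claims forwarded by the Cognito user pool authorizer.
  const claims = (event.requestContext.authorizer?.claims ?? {}) as Record<string, string>;

  // With an access token, granted OAuth scopes arrive as a space-separated string,
  // prefixed by the resource server identifier (assumed to be "prices" here).
  const scopes = (claims["scope"] ?? "").split(" ");
  // Hypothetical custom claim added by a Pre Token Generation trigger (ID tokens only).
  const canWrite = claims["custom:canWritePrices"] === "true";

  if (event.httpMethod === "POST" && !scopes.includes("prices/prices.write") && !canWrite) {
    return { statusCode: 403, body: JSON.stringify({ message: "Forbidden" }) };
  }

  // ... handle GET /prices and POST /prices here ...
  return { statusCode: 200, body: JSON.stringify({ message: "ok" }) };
};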
A few assumptions there as I'm basically in the dark. I think the basics are there but not sure how to connect everything together.
Hoping someone has figured this out.
I have a data processing web service that accepts a google spreadsheet as input. A spreadsheet owner enables my data service to read the spreadsheet by sharing the sheet with the service email. This works well and was surprisingly easy to setup.
But the service email is not a valid email address and generates a DNS error in the user's mailbox. The service also does not receive a notification that a spreadsheet has been shared.
Is there a way to associate a valid public email address with my Google project that would allow it to receive the sharing notification sent by sharing the spreadsheet? This would ideally also be the email address that the spreadsheet owner used to share the sheet with the service.
This is currently not implemented. Service accounts have email addresses like example@[project-id].iam.gserviceaccount.com, but these are currently only identifiers and don't have mailboxes (in fact their domains don't resolve to an IP address and there is no MX record). This might eventually change, but currently it is not possible to receive mail for service accounts.
A possible workaround is to use 3-legged OAuth to get access to your users' complete drive data (instead of just that one document). This might not be a viable option since giving an application access to the drive scope is a very serious commitment.
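For reference, a rough sketch of that workaround with the googleapis Node/TypeScript client; the client ID/secret, redirect URI and the read-only Drive scope are placeholders you would adapt:

import { google } from "googleapis";

const oauth2Client = new google.auth.OAuth2(
  process.env.GOOGLE_CLIENT_ID,     // assumed env vars
  process.env.GOOGLE_CLIENT_SECRET,
  "https://your-service.example.com/oauth2callback" // placeholder redirect URI
);

// 1. Send the user here so they can grant your app access to their Drive.
export const authUrl = oauth2Client.generateAuthUrl({
  access_type: "offline",
  scope: ["https://www.googleapis.com/auth/drive.readonly"],
});

// 2. In the redirect handler, exchange the authorization code for tokens.
export async function handleOAuthCallback(code: string) {
  const { tokens } = await oauth2Client.getToken(code);
  oauth2Client.setCredentials(tokens);

  // 3. Read the user's spreadsheets on their behalf instead of relying on a share.
  const drive = google.drive({ version: "v3", auth: oauth2Client });
  const files = await drive.files.list({
    q: "mimeType='application/vnd.google-apps.spreadsheet'",
  });
  return files.data.files;
}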
There is one workaround that I want to mention since people might think of it but it's a bad idea: you could create a Google consumer (GMail) account and get a 3-legged OAuth token for it using the GMail scope (to receive email) and the Drive scope (to access documents shared with it). I strongly recommend against using something like this in a production environment since consumer accounts are not built for service account scenarios (e.g. mailbox size, email reception QPS before abuse protection is triggered, credential exchanges, ...). A solution like this will eventually blow up.