How to get the contract deployer address using ethers.js?

I want to know how to get the contract deployer address after the contract has been deployed.
let contract = await factory.deploy(name, name + "NFT");
await contract.deployed();
let deployer = contract.deployTransaction.from;
I can get the deployer this way when I deploy the contract myself, but I need to get it in the backend.
Looking forward to hearing from you.
Thank you :)
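If the deployment transaction hash is available to the backend, one possible approach (a minimal sketch, assuming ethers v5 and that the hash was stored off-chain at deploy time) is to read the transaction receipt:

// Minimal sketch, assuming ethers v5 and that deployTxHash was saved
// somewhere when the contract was deployed.
const { ethers } = require("ethers");

async function getDeployer(deployTxHash) {
  const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
  const receipt = await provider.getTransactionReceipt(deployTxHash);
  // receipt.from is the deployer, receipt.contractAddress the deployed contract
  return { deployer: receipt.from, contract: receipt.contractAddress };
}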

Related

Get user's wallet address in web3.js

I need to call an API that requires the user's wallet address. However, I can only get the wallet's accounts with web3.eth.getAccounts(), not the wallet address.
I am using WalletConnect and was able to create web3 instance.
Thanks!
You need to use requestAccounts and ask the user for permission to access their wallet addresses.
For privacy reasons, the website cannot do this by default.
You also need to set up your Web3 instance properly with the wallet in order for it to work. Because your question did not contain any example code or a repeatable example, it is not possible to tell whether you are doing this correctly.
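For example, a minimal sketch assuming a web3 instance already wired to the injected or WalletConnect provider:

// Prompts the wallet for permission and returns the first authorized address.
async function connectWallet(web3) {
  const accounts = await web3.eth.requestAccounts(); // triggers the wallet's permission prompt
  return accounts[0];
}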

Deploying Solidity Smart Contract via Angular Frontend

We have created an Admin panel to access all of the contract methods, but now we also need one-click deployment from the Admin for different business scenarios. We need a Remix-IDE-like interface in our Admin where an admin user can paste their smart contract and deploy it via the frontend itself using MetaMask. Is there any way to achieve this?
I understand the backend would also be needed for compilation and bytecode generation; that's not an issue, but it should work without asking for the deployer's private key.
It should work like this:
Log in to the Admin portal.
Go to Add Contract and paste the contract.
Some validation/compilation on the backend.
After clicking Deploy, it should ask MetaMask to deploy rather than deploying from the backend with a private key.
You can compile a contract on the frontend as well, using the solc NPM package.
Note that the GitHub repo is called solc-js but the NPM package is just solc. There's another NPM package called solc-js which seems to be abandoned.
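For instance, a minimal compile step with the solc package might look like this (the file and contract names are placeholders for your own source):

// Minimal sketch using the solc NPM package's standard JSON input;
// "MyToken.sol" / "MyToken" are placeholders.
const solc = require("solc");

function compile(contractSource) {
  const input = {
    language: "Solidity",
    sources: { "MyToken.sol": { content: contractSource } },
    settings: { outputSelection: { "*": { "*": ["abi", "evm.bytecode"] } } },
  };
  const output = JSON.parse(solc.compile(JSON.stringify(input)));
  const artifact = output.contracts["MyToken.sol"]["MyToken"];
  return { abi: artifact.abi, bytecode: "0x" + artifact.evm.bytecode.object };
}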
Contract deployment is nothing more than sending a transaction with specific params. Specifically, the to field is omitted and the data field contains the compiled bytecode.
You can request MetaMask to sign a transaction:
const params = [{
  from: '0x<userAddress>',
  data: '0x<contractBytecode>',
}];
const txHash = await ethereum.request({
  method: 'eth_sendTransaction',
  params,
});
To get their address, you first need them to connect MetaMask to your app.
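For example, a minimal sketch against the injected MetaMask provider:

// Asks the user to connect MetaMask and returns the selected address.
const accounts = await ethereum.request({ method: 'eth_requestAccounts' });
const userAddress = accounts[0];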

How to deal with "Only the owner of the contract can update the smart contract"

I am developing a smart contract that holds user information. The problem is that when I try to update the user information from an address that did not deploy the contract, the request succeeds but the user information does not change. But when I change the user information from the address that deployed the contract, it changes the information for all users; the change is reflected on all accounts, even though the accounts are different.
function setUserName(string memory _userName) public {
    users[msg.sender].userName = _userName;
}
This is the function that updates the user info. I believe that msg.sender is the account that is calling the contract, not the one that deployed it. I am using MetaMask and Ganache for accounts. The first account is added to MetaMask in the Chrome browser and the second account is added to MetaMask in the Mozilla Firefox browser.
Steps to reproduce the username bug:
Open two different browsers.
Start the Ganache server.
In browser one, add the first address's private key to MetaMask in order to add ETH.
In browser two, add the second address's private key to MetaMask in order to add ETH.
Deploy the contracts with "truffle migrate --reset". It will migrate the contracts with the first address in Ganache.
From browser two, try to update the username. You will see that the update is reported as successful but the username is not updated (even after a refresh).
Now from browser one, update the username by going to the settings page. You will see that the username is updated and the change is also reflected in browser two, even though the update was made from browser one's address.
The same happens when we try to obtain points by exchanging tokens. From browser one the request succeeds, but from browser two it throws an error indicating "ERC20: transfer amount exceeds balance", even though the user has tokens in their account.
The problem was that the data was being saved correctly, but when fetching the records, Solidity was resolving msg.sender to the creator of the contract rather than the account that sent the transaction. To deal with this, I am sending the user's address from the frontend in the call and receiving it as a parameter in the respective functions. So instead of using msg.sender, I use the address received from the frontend.
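For illustration, the frontend call might look roughly like this (a sketch, assuming a web3.js contract instance and a hypothetical getUserName(address) getter that takes the address as a parameter):

// Sketch: pass the user's address explicitly instead of relying on msg.sender
// inside a read-only call. getUserName(address) is a hypothetical getter.
const accounts = await web3.eth.getAccounts();
const userAddress = accounts[0];
const userName = await contract.methods.getUserName(userAddress).call({ from: userAddress });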

3-legged OAuth and one-time-code flow using google-auth-library-ruby with google-api-ruby-client

Quick overview: I have a Ruby app that runs nightly and does something with a user's Google Calendar. The user has already given access via a separate React app. I'm having trouble getting the Ruby app to access the user's calendar with the authorization code from the React app.
Details: I have a React front-end that can sign in a user using gapi and subsequently sign the user into Firebase. Here is how I configure the gapi obj:
this.auth2 = await loadAuth2WithProps({
  apiKey: config.apiKey,     // from firebase
  clientId: config.clientId, // from gcp
  // ....
  access_type: "offline",    // so we get an authorization code
});
Here is sign in:
doSignInWithGoogle = async () => {
  const googleUser = await this.auth2.signIn();
  const token = googleUser.getAuthResponse().id_token;
  const credential = app.auth.GoogleAuthProvider.credential(token);
  return this.auth.signInWithCredential(credential);
};
The user's next step is to grant the app offline access to their calendar:
doConnectGoogleCalendar = async () => {
  const params = { scope: scopes };
  const result = await this.auth2.grantOfflineAccess(params);
  console.log(result.code); // logs: "4/ygFsjdK....."
};
At this point the front end has the authorization code that can be passed to a server-side application to be exchanged for access and refresh tokens. I haven't been able to find a good way to use a user-supplied auth code to make calls to the granted scopes. This is how I've configured the OAuth client:
auth_client = Google::APIClient::ClientSecrets.load(
  File.join(Rails.root, 'config', 'client_secrets.json') # downloaded from GCP
).to_authorization
^ I'm using the same GCP credentials on the backend that I'm using for the frontend. It is an "OAuth 2.0 Client ID" type of credential. I'm unsure if this is good practice or not. Also, do I need to define the same config that I do on the frontend (like access_type and scope)?
Next I do what the docs say to get the access and refresh tokens (click Ruby):
auth_client.code = authorization_code_from_frontend
auth_client.fetch_access_token!
---------
Signet::AuthorizationError (Authorization failed. Server message:)
{
"error": "invalid_grant",
"error_description": "Bad Request"
}
Is there something I'm missing in setting up a separate backend application that can handle offline access to a user granted scope? There is so much different information on these libraries but I haven't been able to distill it down to something that works.
UPDATE
I found this page describing the "one-time-code flow", which I haven't found anywhere else in all of the docs I've gone through. It does answer one of my minor questions above: yes, you can use the same client secrets as the web application for the backend (see the full example at the bottom, where they do just that). I'll explore it more and see if my bigger problem can be resolved. I'm also going to update the title to include the one-time-code flow.
After a good amount of digging through code samples and source code, I have a clean working solution. The page in my "update" led me to find out that the ClientSecrets way I was doing things had been deprecated in favor of the google-auth-library-ruby project. I'm glad I found it because it seems to be a more complete solution, as it handles all of the token management for you. Here is the code to set everything up:
def authorizer
  client_secrets_path = File.join(Rails.root, 'config', 'client_secrets.json')
  client_id = Google::Auth::ClientId.from_file(client_secrets_path)
  scope = [Google::Apis::CalendarV3::AUTH_CALENDAR_READONLY]
  redis = Redis.new(url: Rails.application.secrets.redis_url)
  token_store = Google::Auth::Stores::RedisTokenStore.new(redis: redis)
  Google::Auth::WebUserAuthorizer.new(client_id, scope, token_store, "postmessage")
end
and then this is how I use the authorization code:
def exchange_for_token(user_id, auth_code)
  credentials_opts = { user_id: user_id, code: auth_code }
  credentials = authorizer.get_and_store_credentials_from_code(credentials_opts)
end
After calling that method, the library stores the exchanged tokens in Redis (you can configure where they are stored) for later use, like this:
def run_job(user_id)
  credentials = authorizer.get_credentials(user_id)
  service = Google::Apis::CalendarV3::CalendarService.new
  service.authorization = credentials
  calendar_list = service.list_calendar_lists.items
  # ... do more things ...
end
There is so much info out there that it is difficult to isolate what applies to each situation. Hopefully this helps anyone else who gets stuck with the "one-time-code flow" so they don't spend days banging their head on their desk.

SOAP Exception when trying to Access GP WebServices

I am trying to write a program that simply connects to a GP web service and invokes its GetCustomerList() method to get customers from Great Plains. The code below duplicates what I was able to find in the documentation; however, when I run it I get a SoapException. I am not sure if I am missing credentials (e.g. username and password) or if I am even invoking this properly. I believe I have the security settings configured correctly in the Dynamics Security Console, but I am not 100% sure, or whether there is anything else I need to configure. Any help would be greatly appreciated.
public IList<string> GetCustomerNames()
{
    // Added a ServiceReference to the actual WebService, which appears to be working
    var service = new DynamicsGPClient();
    var context = new Context();
    var customerKey = new CustomerKey();
    var companyKey = new CompanyKey();

    // Trying to load the test data
    companyKey.Id = (-1);
    context.OrganizationKey = (OrganizationKey)companyKey;

    // Trying to load the test data
    customerKey.Id = "AARONFIT0001";

    var customers = service.GetCustomerList(new CustomerCriteria(), context);
    return customers.Select(x => x.Name).ToList();
}
Sounds like a security issue. Are you getting a specific error message? That might be helpful.
Also found this:
It sounds to me like you need a Windows App Pool identity on your web service. Right now you have the IIS security set to "anonymous", so the clients aren't required to pass credentials when they call your methods. You'll need to turn that off and run your app pool as a Windows account. Once you've got that working, you can choose whether to just add that one App Pool identity into the security for the web services and have all the operations done as that account (poor security), or, in your wrapper, use the HTTP user's context identity and set it as the "WorkOnBehalfOf" property for the GP web service call you're actually using.
You'll have to give the App Pool identity "WorkOnBehalfOf" permission in the web service security console, and then grant it to whichever users you want to be able to call the web service. This keeps the security model intact, and you're authorizing that the users calling the wrapped web service actually do have permission to call the original GP web service methods. (This is how Business Portal forwards requests to the web services.)
https://groups.google.com/forum/?fromgroups=#!topic/microsoft.public.greatplains/W7gAo_zXit8