I am digging into IPFS in order to start an NFT project, so I am working on the frontend with the ipfs-http-client npm package (npm i ipfs-http-client).
My simple test code is like below:
const { create, urlSource } = require('ipfs-http-client') // urlSource must be imported as well
const ipfsClient = create('https://ipfs.infura.io:5001/api/v0')
// add() resolves to { path, cid, size }; run this inside an async function
const { cid } = await ipfsClient.add(urlSource("https://camo.githubusercontent.com/e92540c54c9b47f684b0e4dd5442ebe20ddbbe2e9699c29ce8400c055fa46e6a/68747470733a2f2f697066732e696f2f697066732f516d65364b4a644b637038355459624c78754c56376f517a4d694c72656d4437484d6f584c5a456d676f36526e682f6a732d697066732d737469636b65722e706e67"))
//https://ipfs.io/ipfs/QmUQeyhy7yY9yZUXKbKLCnPAoGKCeuhH3XxzprcJfYiz1h
So far so good, without any problem, and the data is accessible on the IPFS network. My question is very conceptual and fundamental, I believe.
Apparently, an image uploaded to IPFS is stored on IPFS nodes, and it remains accessible as long as at least one node holds the data; otherwise we need to pin it, or the data will no longer be retrievable.
I found the article at the following link saying that add() pins by default with the Infura API. I am not sure whether that is correct, because I did not even provide an Infura API secret key for this operation. Or is it a free service?
How to pin a hash for IPFS through Infura's gateway using ipfs-http-client API
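For what it's worth, ipfs-http-client also exposes pin.add, so pinning explicitly instead of relying on any default would presumably look like this (an untested sketch, continuing from the client above):

// hypothetical follow-up: pin the CID explicitly so it survives garbage collection
const pinned = await ipfsClient.pin.add(cid)
console.log(pinned.toString())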
I am not sure what I don't know, or whether this is even possible. I think this would be a similar issue for Ethereum, which is why I added the Ethereum tag.
I am going to describe what I want to achieve with an example:
There is a token called "Elonomics".
https://bscscan.com/address/0xd3ecc6a4ce1a9faec1aa5e30b55f8a1a4b84f938
there is an owner with address "0x3a78ea5c462f0afa76fa091a70a7bcd020b274d6"
all txs from the owner address are listed here: https://bscscan.com/txs?a=0x3a78ea5c462f0afa76fa091a70a7bcd020b274d6
when I take one of the owner's transactions, e.g. 0x6f81f2dbd285d772c6b34151b676f6749ef75ac9a6c76b5e4dfa844a0c6932d2
I can read the logs from this transaction in:
https://bscscan.com/tx/0x6f81f2dbd285d772c6b34151b676f6749ef75ac9a6c76b5e4dfa844a0c6932d2#eventlog
so I can read that somebody set "totalSupply :1500000800000"
and now my questions:
Is it possible to fetch all txs related to this specific owner address, together with their logs, as JSON (or any other format that can be consumed dynamically by dApps)?
Is the data in tx logs encrypted? (If yes, what is the format, and how does bscscan decrypt it?)
Is it possible to fetch this data directly from the blockchain instead of using a third-party application like bscscan?
Is it possible to fetch all txs related to this specific owner address, together with their logs, as JSON (or any other format that can be consumed dynamically by dApps)?
Yes, because all this data is stored on a blockchain.
Is the data in tx logs encrypted? (If yes, what is the format, and how does bscscan decrypt it?)
No. All data on a public blockchain is public. Event logs are ABI-encoded, not encrypted; anyone with the contract's ABI can decode them, which is exactly what bscscan does.
Is it possible to fetch this data directly from the blockchain instead of using a third-party application like bscscan?
Run your own BSC node. See the web3.py library for how to interact with an Ethereum-compatible blockchain such as BSC.
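If you prefer JavaScript over web3.py, here is a minimal sketch with web3.js. The public RPC endpoint is an assumption (your own node's URL works the same way), and the contract address is the token from the question:

const Web3 = require('web3')

// Any BSC JSON-RPC endpoint works here; this public one is an assumption.
const web3 = new Web3('https://bsc-dataseed.binance.org/')

async function main() {
  // Fetch raw logs emitted by the token contract. Public endpoints
  // usually cap the block range, so page through blocks in batches
  // for real use.
  const logs = await web3.eth.getPastLogs({
    address: '0xd3ecc6a4ce1a9faec1aa5e30b55f8a1a4b84f938',
    fromBlock: 'earliest',
    toBlock: 'latest',
  })

  // Logs are ABI-encoded, not encrypted. Decode one as an ERC-20
  // Transfer event (only valid for logs whose topics[0] matches the
  // Transfer signature; topics[0] is dropped for non-anonymous events).
  const decoded = web3.eth.abi.decodeLog(
    [
      { indexed: true, name: 'from', type: 'address' },
      { indexed: true, name: 'to', type: 'address' },
      { indexed: false, name: 'value', type: 'uint256' },
    ],
    logs[0].data,
    logs[0].topics.slice(1)
  )
  console.log(JSON.stringify(decoded, null, 2))
}

main()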
How can we upload data from a webpage to an IPFS server? I have already uploaded an empty .txt/JSON file to IPFS.
The file can either be a .txt or a JSON file.
If I understand your question correctly, you are trying to upload something from your webpage to an IPFS cluster (at least that is the basic context I understand).
With this intent, I would say this is possible.
What you need to do is:
a. Upload the file to a local IPFS node from your webpage using the IPFS API (read the official documentation) and get back the content identifier (CID).
Problem:
What happens when the local node is not available? (You simply shut down your computer!)
The file you uploaded will be garbage-collected once it goes unused for some time, and as a result you lose the data completely.
Solution:
Use a pinning service. Pinning is the mechanism that lets you tell IPFS to always keep a given object somewhere accessible, saving it from the internal garbage collection the system performs.
The default pinning location is your local node, and you can pin there using the IPFS API.
You can also use a hosted pinning service like Pinata, which keeps the content on its own nodes (you scale the number of nodes and you pay!).
In this way you can guarantee that the content is always served and online.
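A minimal sketch of step (a) plus an explicit local pin, assuming a local IPFS daemon with its API on the default port and CORS configured to allow your page; the file input id is hypothetical:

const { create } = require('ipfs-http-client')

// Local daemon API (assumption: default port, CORS allows this page).
const ipfs = create({ url: 'http://127.0.0.1:5001/api/v0' })

async function upload() {
  // Hypothetical <input type="file" id="file-input"> on the page.
  const file = document.querySelector('#file-input').files[0]

  const { cid } = await ipfs.add(file)  // upload and get the CID back
  await ipfs.pin.add(cid)               // pin so local GC keeps it
  console.log(cid.toString())
}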
Hope this helps you get started.
For my little project I need help, and to know whether it's possible at all.
The project is about signing documents using blockchain and IPFS. I am trying to create a DApp with the following features:
The signer has to log in.
After login succeeds, you can upload a document.
You can sign the uploaded document.
A DocumentHash is generated and should be stored on the Ethereum blockchain. The signed document is stored on IPFS.
Now I am trying to write my smart contract. The signature should be created as an object, made up of a name and the current time. This means the signature should be built from the login information (first name, last name, and the SignerID, which is unique, like a password).
Is this possible with a smart contract? I don't know how to create this signature within a smart contract and attach it to the document. After that, I know what to do: hash the whole document and push it to IPFS...
Thank you!
The good news is: All your problems are already solved. The bad news (for you): Without blockchain.
I'm also not sure whether this is smart in any way, but typically this is the way you want to go:
Take a hash over all the documents you want to sign
Using public-key cryptography (https://en.wikipedia.org/wiki/Public-key_cryptography), sign the hash with your private key. Signing the hash will prove its authenticity later.
Put the hash in any blockchain you want
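A minimal sketch of steps 1 and 2 in Node.js (the file and key paths are placeholders; assumes an RSA or EC private key in PEM format):

const crypto = require('crypto')
const fs = require('fs')

// Step 1: hash the document (placeholder file name).
const documentHash = crypto.createHash('sha256')
  .update(fs.readFileSync('document.pdf'))
  .digest('hex')

// Step 2: sign the hash with your private key (placeholder key file).
// Anyone with the matching public key can verify authenticity later.
const signer = crypto.createSign('SHA256')
signer.update(documentHash)
const signature = signer.sign(fs.readFileSync('private-key.pem'), 'hex')
console.log(signature)

Step 3 is then just storing documentHash (and, if you like, signature) on the chain of your choice.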
By the way: there are a lot of Certificate Authorities that would also sign your hash from step 2, without any energy-consuming, inefficient blockchain stuff. Just saying.
I want to introduce key rotation to my system, but that requires re-encryption. It would be nice to do it reactively on some event or trigger, but I can't find anything like that in the Google documentation.
After a rotation event, I want to re-encrypt the data with the new key and destroy the old one.
Any ideas how to achieve this goal?
As of right now, the best you can do is write something that polls GetCryptoKey at regular intervals, checks whether the primary version has changed, and decrypts and re-encrypts if it has.
We definitely understand the desire for eventing based on key lifecycle changes, and we've been thinking about the best way to accomplish that in the future. We don't have any plans to share yet, though.
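In the meantime, a minimal polling sketch with the Node.js client (@google-cloud/kms); the project, location, key ring, and key names are placeholders:

const { KeyManagementServiceClient } = require('@google-cloud/kms')

const client = new KeyManagementServiceClient()
const name = client.cryptoKeyPath('my-project', 'global', 'my-ring', 'my-key')

let lastPrimary = null
setInterval(async () => {
  // Symmetric keys always have a primary version.
  const [key] = await client.getCryptoKey({ name })
  if (lastPrimary && key.primary.name !== lastPrimary) {
    // The primary version changed: kick off re-encryption here.
  }
  lastPrimary = key.primary.name
}, 60 * 1000) // check once a minute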
When you rotate an encryption key (or when you enable scheduled rotation on a key), Cloud KMS does not automatically delete the old key version material. You can still decrypt data previously encrypted with the old key unless you manually disable/destroy that key version. You can read more about this in detail in the Cloud KMS Key rotation documentation.
While you may have business requirements, it's not a Cloud KMS requirement that you re-encrypt old data with the new key version material.
New data will be encrypted with the new key
Old data will be decrypted with the old key
At the time of this writing, Cloud KMS does not publish an event when a key is rotated. If you have a business requirement to re-encrypt all existing data with the new key, you could do one of the following:
Use Cloud Scheduler
Write a Cloud Function connected to Cloud Scheduler that is invoked on a periodic basis. For example, if your keys rotate every 72 hours, you could schedule the function to run every 24 hours. Happy to provide some sample code if that would help, but the OP didn't specifically ask for code.
Long-poll
Write a long-running function that polls the KMS API to check whether the primary crypto key version has changed, and trigger your re-encryption when a change is detected.
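Whichever trigger you choose, the re-encryption step itself is just decrypt-then-encrypt against the same key name: Cloud KMS determines the correct version for decryption from the ciphertext and uses the current primary for encryption. A sketch with the Node.js client (key name placeholders as above):

const { KeyManagementServiceClient } = require('@google-cloud/kms')

const client = new KeyManagementServiceClient()
const name = client.cryptoKeyPath('my-project', 'global', 'my-ring', 'my-key')

// Decrypt with whichever version produced the ciphertext, then
// encrypt the recovered plaintext with the current primary version.
async function reencrypt(ciphertext) {
  const [decrypted] = await client.decrypt({ name, ciphertext })
  const [encrypted] = await client.encrypt({ name, plaintext: decrypted.plaintext })
  return encrypted.ciphertext
}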
I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQL CE to the clients. Clients will only sync with the server; no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
The schema created through client provisioning to SQL CE does not set up default values for uniqueidentifier types.
FK constraints are not created on client
Here is the code being used to create the client schema (pulled from an example I found somewhere online):
static void Provision()
{
    // connection to the SyncDB server database
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    // (note: @ for a verbatim string literal, not #)
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create the CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // start the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider that has a CreatingSchema event. This code example actually shows setting the RowGuid property on a column which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining what either of these is.
I'm further confused about whether I should make schema changes at provisioning time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only. It fires the first time a sync is initiated and the framework detects that the client database has not been provisioned (creating the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
While researching solutions without using SyncAgent I found that the following would also work (in addition to my commented solution above):
Provision the client and let the framework create the client [user] schema. Now you have your tables.
Deprovision - this removes the restrictions on editing the tables/columns
Make your changes (in my case, setting Is RowGuid on PK columns and adding FK constraints) - this actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on an existing column
Provision again using DbSyncCreationOption.CreateOrUseExisting