I have a problem where I'm trying to implement a smart contract with 'future maintainability' in mind. Suppose there is a contract that represents a person, Person.sol. We have another contract called Record.sol. At the start, this record only needs to be handled by the police department when they want to send some data to Person.sol and change its state. Later, Record.sol needs to be modified so it can also be used by a hospital, Hospital.sol.
Inheritance and an abstract method will be required, and right now I don't exactly know how to do it. The following code should clarify what I'm trying to achieve.
Person.sol
contract Person {
    struct Record {
        string name;
        uint time;
    }

    Record[] records;

    function updateRecords(string memory _name, uint _time) public {
        Record memory _record = Record({name: _name, time: _time});
        records.push(_record);
    }
}
Record.sol
contract Person {
    struct Record {} // can this work? Object properties are defined in Person
    function updateRecords(string memory _name, uint _time) public {}
}

abstract contract Record {
    function commit(address payable _personaddr, string memory _name, uint _time) public {
        _personaddr.transfer(address(this).balance);
        // creates an instance of the Person contract located at the _personaddr address
        Person person = Person(_personaddr);
        person.updateRecords(_name, _time);
    }

    // an abstract method which must be defined in contracts extending this
    function abstractMethod() public virtual;
}
Police.sol
contract Police is Record {
    // inherits commit() and the abstract abstractMethod()
    function policeNewMethod(address payable _personaddr, string memory _name, uint _time) public {
        // does something new
        commit(_personaddr, _name, _time);
    }

    function abstractMethod() public override {
        // police's own implementation of the method
    }
}
Hospital.sol
contract Hospital is Record {
    // inherits commit() and the abstract abstractMethod()
    function hospitalNewMethod(address payable _personaddr, string memory _name, uint _time) public {
        // does something
        commit(_personaddr, _name, _time);
    }

    function abstractMethod() public override {
        // hospital's own implementation of the method
    }
}
I don't want contracts extending Record.sol to interact directly with Person.sol's updateRecords() method. Instead, a check should be implemented to verify that the contract calling updateRecords() is indeed an extension of Record.sol. Is there a way to check this type, like Java's instanceof or .getClass()?
Sorry for the bad formatting, but I wrote it all in an editor and the indentation doesn't translate smoothly.
The short answer is no, at least in plain Solidity (you may be able to do it using assembly, but I wouldn't know about that). What you could do is put a function in Record.sol such as:
function isRecord() public pure returns(bool){
return true;
}
and call that from the Person contract to verify that the calling contract does inherit from Record. Although this could open you up to security vulnerabilities, depending on what you will be doing with these contracts, since any contract can implement the same function.
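For example, here is a minimal sketch (my own illustration, assuming a 0.8.x compiler) of how Person could gate updateRecords() on that marker function; note that any contract can implement isRecord(), so this is a convention rather than a real type check:

pragma solidity ^0.8.0;

interface IRecord {
    function isRecord() external pure returns (bool);
}

contract Person {
    struct Record {
        string name;
        uint time;
    }

    Record[] public records;

    modifier onlyRecord() {
        // Reverts if the caller is an EOA or a contract that does not expose isRecord().
        require(IRecord(msg.sender).isRecord(), "caller is not a Record contract");
        _;
    }

    function updateRecords(string memory _name, uint _time) public onlyRecord {
        records.push(Record({name: _name, time: _time}));
    }
}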
I'm new to Solidity. I have a problem where, when I call the purchaseCard function in the Remix IDE, it always returns the error: "The called function should be payable if you send value and the value you send should be less than your current balance."
I think the problem is the complexity of my structs, or that I accessed variables in a way that isn't allowed, but I still can't find a fix after many days of researching.
These are my structs.
struct Card {
string name;
uint price;
}
mapping(uint => Card) public cards;
uint public numberOfTypes;
struct Purchase {
Card card;
uint purchaseDate;
}
struct User {
uint numberOfCards;
Purchase[] purchase;
bool exist;
}
mapping(address => User) public users;
And these are my functions; I'll keep it short.
// function addCard(string memory _name, uint _price) public isOwner;
// function removeCard(uint _id) public isOwner;
// function checkExistedUser(address _userAddress) public view returns(bool);
function purchaseCard(uint _id) public {
User storage user = users[msg.sender];
if (!checkExistedUser(msg.sender)) {
user.exist = true;
user.numberOfCards = 0;
}
Purchase storage purchase = user.purchase[user.numberOfCards];
purchase.card = cards[_id];
purchase.purchaseDate = block.timestamp;
user.numberOfCards++;
}
// function expiredCard(uint _id) public;
// function showRemainingDate(uint _id) public view returns(uint);
// function showPurchasedCards() public view returns (Purchase[] memory);
This is my full code: https://pastebin.com/4HXDZZVK
Thank you very much, I hope to learn more things.
In this case, you cannot index into an empty 'slot' of the purchase array in storage before filling it, because purchase is an array and not a mapping!
To solve this issue, you can use the push() method (available only for storage arrays) to append a new Purchase struct to the purchase array.
Following your logic, you would change the purchaseCard() method this way:
function purchaseCard(uint _id) public {
User storage user = users[msg.sender];
if (!checkExistedUser(msg.sender)) {
user.exist = true;
user.numberOfCards = 0;
}
user.purchase.push(Purchase(cards[_id], block.timestamp));
user.numberOfCards++;
}
In Solidity, if a function has to receive ETH when called, it must be marked payable.
Try this:
function purchaseCard(uint _id) public payable {
or add payable to your constructor.
Please refer to this: https://solidity-by-example.org/payable/
And by the way, please share the complete code snippet next time.
I wasn't able to understand the entire code and flow, but while debugging,
the revert occurs in the line below in the purchaseCard function:
Purchase storage purchase = user.purchase[user.numberOfCards];
It seems like you are accessing the users mapping, which does not yet contain a User struct for that address.
I think you forgot to assign a value to the users mapping in the addCard function.
I may be wrong, but I hope it helps.
I am implementing a factory pattern contract, and in the child's constructor I want to trigger a function which deposits into the contract.
--Factory--
mapping(uint256 => address) public children;

function newChild(uint256 _childId, uint256 _potentialPayoutAmount) public {
    Child child = new Child(_potentialPayoutAmount);
    children[_childId] = address(child);
}

function getChild(uint256 _childId) public view returns (address) {
    return children[_childId];
}
--Child Contract--
address payable public spender;
address public receiver;
mapping(address => uint256) public depositedFor;

constructor(uint256 _potentialPayoutAmount) {
    autoDeposit(_potentialPayoutAmount);
}

function autoDeposit(uint256 _potentialPayoutAmount) internal {
    depositedFor[receiver] = depositedFor[receiver] + _potentialPayoutAmount;
}
So my problem is that I want the spender address in the child contract to be the account that is spending the money, even though the autoDeposit function is obviously being triggered from the factory contract.
Any suggestions would help. Happy to elaborate if I was unclear.
Thanks!
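One way to get that behaviour (a rough sketch, assuming the factory is called directly by the account that should be recorded as the spender, and with the deposit bookkeeping keyed by that account instead of a separate receiver) is to forward msg.sender from the factory into the Child constructor:

pragma solidity ^0.8.0;

contract Child {
    address payable public spender;
    mapping(address => uint256) public depositedFor;

    // _spender is forwarded by the factory; inside this constructor msg.sender
    // is the factory contract, not the account that started the transaction.
    constructor(address payable _spender, uint256 _potentialPayoutAmount) {
        spender = _spender;
        depositedFor[_spender] += _potentialPayoutAmount;
    }
}

contract Factory {
    mapping(uint256 => address) public children;

    function newChild(uint256 _childId, uint256 _potentialPayoutAmount) external {
        // msg.sender here is the account calling the factory, so pass it along.
        Child child = new Child(payable(msg.sender), _potentialPayoutAmount);
        children[_childId] = address(child);
    }
}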
Is it possible for the same contract to handle multiple ERC721 tokens? And if so, how?
Let’s say you’re making a game and you need to track two types of ERC721 tokens: unique Weapons and unique Vehicles.
You could have Structs for them, as follows:
struct Weapon {
    uint256 IDNumber;
    string kind; // Sword, Bow&Arrow, Gun, Rifle, etc.
    string specialFeature;
}

struct Vehicle {
    uint256 IDNumber;
    string kind; // Car, Airplane, Boat, Spaceship, etc.
    uint256 damageFactor;
}
To track their ownership, you could keep two of each structure managing them - for example, instead of having the standard:
// Enumerable mapping from token ids to their owners
EnumerableMap.UintToAddressMap private _tokenOwners;
You would do:
// Enumerable mapping from token ids to their owners
EnumerableMap.UintToAddressMap private _weaponTokenOwners;
EnumerableMap.UintToAddressMap private _vehicleTokenOwners;
(This may not be the most elegant way, but it's a start.)
The real question though is: how would you handle mandatory functions that are part of the ERC721 standard, such as balanceOf() and ownerOf()?
To be specific, are you allowed to, say, add an additional argument to these methods' signatures to help indicate which particular token you're querying about?
For example, instead of this:
function balanceOf(address owner) public view override returns (uint256) {
}
You’d add a tokenName argument to the function’s signature, as follows:
function balanceOf(address owner, string memory tokenName) public view override returns (uint256) {
    if (tokenName == "weapon") {
        return ownerAndHisWeaponTokensDictionary[owner].length;
    }
    else if (tokenName == "vehicle") {
        return ownerAndHisVehicleTokensDictionary[owner].length;
    }
}
And you’d do something similar for ownerOf()?
Is this allowable?
And is this even the right approach to tackling this, or is there a different way to reason about all of this and approach it differently?
My approach would be to define 3 separate contracts on 3 separate addresses:
address 0x123123 as the Weapons ERC-721 token contract
address 0x456456 as the Vehicles ERC-721 token contract
address 0x789789 as the actual game contract
In the game contract, you can then call the NFT contracts to get or validate values:
function attack(uint attackerWeaponId) {
require(weaponsContract.isOwnerOf(msg.sender, attackerWeaponId));
// ...
}
The isOwnerOf() function takes two arguments, address owner and uint256 weaponId. Also, a user can probably own more than one weapon, which is why I'm showing the validation.
And the weapons contract balanceOf(address) would reflect the total amount of the Weapon NFTs that the user has.
mapping (address => Weapon[]) userOwnedWeapons;

function balanceOf(address owner) external view returns (uint256) {
    return userOwnedWeapons[owner].length;
}
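To make the three-contract layout concrete, here is a minimal sketch (assuming OpenZeppelin-style ERC-721 token contracts deployed separately; the standard ownerOf() stands in for the custom isOwnerOf() helper, and the game functions are illustrative):

pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/IERC721.sol";

contract Game {
    IERC721 public immutable weapons;   // e.g. the Weapons token contract
    IERC721 public immutable vehicles;  // e.g. the Vehicles token contract

    constructor(address weaponsContract, address vehiclesContract) {
        weapons = IERC721(weaponsContract);
        vehicles = IERC721(vehiclesContract);
    }

    function attack(uint256 attackerWeaponId) external {
        // The standard ERC-721 ownerOf() is enough to validate ownership.
        require(weapons.ownerOf(attackerWeaponId) == msg.sender, "not your weapon");
        // ... game logic ...
    }

    function canDrive(uint256 vehicleId) external view returns (bool) {
        return vehicles.ownerOf(vehicleId) == msg.sender;
    }
}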
We have a problem with contract redeployment. Each time some logic is changed and a new contract version is deployed, we lose all contract-related data (which is stored in arrays and mappings). Then we need to execute data-load procedures in order to restore the environment to the desired state, which is a time-consuming action. I tried to split the contract into two (AbcDataContract, AbcActionsContract) but faced a problem accessing the mappings: Error: Indexed expression has to be a type, mapping or array (is function (bytes32) view external returns (uint256))
Initial contract :
contract AbcContract {
EntityA[] public entities;
mapping (bytes32 => uint) public mapping1;
mapping (bytes32 => uint[]) public mapping2;
mapping (bytes32 => uint[]) public mapping3;
/* Events */
event Event1(uint id);
event Event2(uint id);
/* Structures */
struct EntityA {
string field1;
string field2;
bool field3;
uint field4;
Status field5;
}
enum Status {PROPOSED, VOTED, CONFIRMED}
function function1(...) returns (...)
function function2(...) returns (...)
function function3(...) returns (...)
function function4(...) returns (...)
function function5(...) returns (...)
}
Refactored contracts :
contract AbcDataContract {
EntityA[] public items;
mapping (bytes32 => uint) public mapping1;
mapping (bytes32 => uint[]) public mapping2;
mapping (bytes32 => uint[]) public mapping3;
/* Events */
event Event1(uint id);
event Event2(uint id);
/* Structures */
struct EntityA {
string field1;
string field2;
bool field3;
uint field4;
Status proposalStatus;
}
enum Status {PROPOSED, VOTED, CONFIRMED}
}
contract AbcActionsContract {
AbcDataContract abcDataContract;
/* constructor */
function AbcActionsContract(address _AbcDataContract) {
abcDataContract = AbcDataContract(_AbcDataContract);
}
/* accessing to the mapping like abcDataContract.mapping1[someId] will raise Solidity compile error */
function function1(...) returns (...)
/* accessing to the mapping like abcDataContract.mapping2[someId] will raise Solidity compile error */
function function2(...) returns (...)
/* accessing to the mapping like abcDataContract.mapping3[someId] will raise Solidity compile error */
function function3(...) returns (...)
function function4(...) returns (...)
function function5(...) returns (...)
}
We would like to implement an approach like the one we have in DB development, where logic changes in stored procedures/views/other non-data objects usually do not affect the data itself. What is the best design solution for this problem?
The first part of your question is fairly easy. To access a public mapping in another contract, simply use ():
abcDataContract.mapping1(someId)
Of course, you can also provide your own access methods on AbcDataContract instead of using the public mappings. If you go down this path, I'd recommend going through an interface to access your data contract.
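For example, a rough sketch of such an interface (written against a current compiler, so the constructor syntax differs from your snippet; the getters are the ones the compiler auto-generates for public mappings, and someBusinessLogic() is just a placeholder):

pragma solidity ^0.8.0;

// Minimal interface for AbcDataContract; only the members AbcActionsContract
// actually needs have to be declared here.
interface IAbcData {
    // Auto-generated getter for `mapping (bytes32 => uint) public mapping1`.
    function mapping1(bytes32 id) external view returns (uint);
    // Auto-generated getter for `mapping (bytes32 => uint[]) public mapping2`;
    // it takes the array index as a second argument.
    function mapping2(bytes32 id, uint index) external view returns (uint);
}

contract AbcActionsContract {
    IAbcData public abcDataContract;

    constructor(address _abcDataContract) {
        abcDataContract = IAbcData(_abcDataContract);
    }

    function someBusinessLogic(bytes32 someId) external view returns (uint) {
        // Call the getter with parentheses; mappings cannot be indexed with [] across contracts.
        return abcDataContract.mapping1(someId);
    }
}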
As for the design part of your question, it looks like you're on the right track. Separating your data store into its own contract has huge benefits. Not only is it much easier to deploy since you don't have to worry about migrating your data, but it's also much cheaper to deploy the new contract.
That being said, there are a couple of things I want to point out with the refactored version you posted.
It's hard to tell what you're planning on doing with EntityA. There's no reference to it in your pseudocode. You can't return structs from functions in Solidity unless they are internal calls (or you explicitly decompose the struct).
Similarly, you can't return strings between contracts. If you plan on using EntityA's field1/field2 in AbcActionsContract, you'll need to convert them to bytes32.
You'll probably want to move your event definitions into your business logic contract.
Separating your data store from your business logic is a key component to upgrading contracts. Using interfaces and libraries help with this. There are several blog posts out there addressing this issue. I personally recommend starting with this one along with a follow up here.
Here is an approximate design of contracts which should resolve the issue of losing data when redeploying changes to the contract's business logic:
contract DomainObjectDataContract {
struct DomainObject {
string field1;
string field2;
bool field3;
uint field4;
Status field5;
}
enum Status {PROPOSED, VOTED, CONFIRMED}
//primitives
//getters/setters for primitives
//arrays
DomainObject[] public entities;
//getters(element by id)/setters(via push function)/counting functions
mapping (bytes32 => uint) public mapping1;
mapping (bytes32 => uint[]) public mapping2;
mapping (bytes32 => uint[]) public mapping3;
//getters(element by id/ids)/setters(depends from the mapping structure)/counting functions
}
contract DomainObjectActionsContract {
DomainObjectDataContract domainObjectDataContract;
/*constructor*/
function DomainObjectActionsContract(address _DomainObjectDataContract) {
domainObjectDataContract = DomainObjectDataContract(_DomainObjectDataContract);
}
/* functions which contain business logic and access/change data via domainObjectDataContract.* Redeploying of this contract will not affect data*/
function function1(...) returns (...)
function function2(...) returns (...)
function function3(...) returns (...)
function function4(...) returns (...)
function function5(...) returns (...)
}
One of the pending design issues is application pagination. Let's suppose we have the following structure:
struct EntityA {
string lessThen32ByteString1;
string moreThen32ByteString1;
string lessThen32ByteString2;
string moreThen32ByteString3;
bool flag;
uint var1;
uint var2;
uint var3;
uint var4;
ProposalStatus proposalStatus;
}
// 100K entities
EntityA[] public items;
And we need to return a subset of the data, based on offset and limit, to our UI in one contract function invocation. Due to various Solidity limits/errors, our current functions (with helper functions for string-to-bytes32 conversion, splitting a string into a few bytes32 parts, and so on) look like this:
function getChunkOfPart1EntityADetails(uint filterAsUint, uint offset, uint limit) public constant
returns (bytes32[100] lessThen32ByteString1Arr, bytes32[100] moreThen32ByteString1PrefixArr, bytes32[100] moreThen32ByteString1SuffixArr) {
}
function getChunkOfPart2EntityADetails(uint filterAsUint, uint offset, uint limit) public constant
returns (bytes32[100] lessThen32ByteString2Arr, bytes32[100] moreThen32ByteString2PrefixArr, bytes32[100] moreThen32ByteString2SuffixArr) {
}
function getChunkOfPart3EntityADetails(uint filterAsUint, uint offset, uint limit) public constant
returns (bool[100] flagArr, uint[100] var1Arr, uint[100] var2Arr, uint[100] var3Arr, uint[100] var4Arr, ProposalStatus[100] proposalStatusArr) {
}
They definitely look awful from a design perspective, but we still have no better solution for pagination. And that's not even mentioning that there is no query-language support; even basic filtering by some field requires a manual implementation.
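For what it's worth, if a newer compiler is an option, a single paginated getter can return whole structs instead of splitting every field into fixed-size bytes32 arrays. A rough sketch (assuming Solidity 0.8.x, where ABI coder v2 is the default; gas and return-size limits still apply, so the caller should keep limit reasonable):

pragma solidity ^0.8.0;

contract EntityAStore {
    enum ProposalStatus {PROPOSED, VOTED, CONFIRMED}

    struct EntityA {
        string lessThen32ByteString1;
        string moreThen32ByteString1;
        string lessThen32ByteString2;
        string moreThen32ByteString3;
        bool flag;
        uint var1;
        uint var2;
        uint var3;
        uint var4;
        ProposalStatus proposalStatus;
    }

    EntityA[] public items;

    // Returns items[offset .. offset+limit) as whole structs; ABI coder v2 can
    // encode struct arrays with dynamic strings in external return values.
    function getEntities(uint offset, uint limit) external view returns (EntityA[] memory page) {
        require(offset <= items.length, "offset out of range");
        uint end = offset + limit;
        if (end > items.length) {
            end = items.length;
        }
        page = new EntityA[](end - offset);
        for (uint i = offset; i < end; i++) {
            page[i - offset] = items[i];
        }
    }
}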
I am very curious about the possibility of providing immutability for Java beans (by beans here I mean classes with an empty constructor providing getters and setters for their members). Clearly these classes are not immutable, and where they are used to transport values from the data layer, this seems like a real problem.
One approach to this problem has been mentioned here on Stack Overflow, called "Immutable object pattern in C#", where the object is frozen once fully built. I have an alternative approach and would really like to hear people's opinions on it.
The pattern involves two classes, Immutable and Mutable, where both implement an interface that provides the non-mutating bean methods.
For example
public interface DateBean {
public Date getDate();
public DateBean getImmutableInstance();
public DateBean getMutableInstance();
}
public class ImmutableDate implements DateBean {
private Date date;
ImmutableDate(Date date) {
this.date = new Date(date.getTime());
}
public Date getDate() {
return new Date(date.getTime());
}
public DateBean getImmutableInstance() {
return this;
}
public DateBean getMutableInstance() {
MutableDate dateBean = new MutableDate();
dateBean.setDate(getDate());
return dateBean;
}
}
public class MutableDate implements DateBean {
private Date date;
public Date getDate() {
return date;
}
public void setDate(Date date) {
this.date = date;
}
public DateBean getImmutableInstance() {
return new ImmutableDate(this.date);
}
public DateBean getMutableInstance() {
MutableDate dateBean = new MutableDate();
dateBean.setDate(getDate());
return dateBean;
}
}
This approach allows the bean to be constructed using reflection (by the usual conventions) and also allows us to convert to an immutable variant at the nearest opportunity. Unfortunately there is clearly a large amount of boilerplate per bean.
I am very interested to hear other people's approach to this issue. (My apologies for not providing a good question, which can be answered rather than discussed :)
Some comments (not necessarily problems):
The Date class is itself mutable so you are correctly copying it to protect immutability, but personally I prefer to convert to long in the constructor and return a new Date(longValue) in the getter.
Both your getWhateverInstance() methods return DateBean, which will necessitate casting; it might be an idea to change the interface to return the specific type instead.
Having said all that, I would be inclined to just have two classes, one mutable and one immutable, sharing a common (i.e. get-only) interface if appropriate. If you think there will be a lot of conversion back and forth, then add a copy constructor to both classes.
I prefer immutable classes to declare fields as final to make the compiler enforce immutability as well.
e.g.
public interface DateBean {
public Date getDate();
}
public class ImmutableDate implements DateBean {
private final long date;
ImmutableDate(long date) {
this.date = date;
}
ImmutableDate(Date date) {
this(date.getTime());
}
ImmutableDate(DateBean bean) {
this(bean.getDate());
}
public Date getDate() {
return new Date(date);
}
}
public class MutableDate implements DateBean {
private long date;
MutableDate() {}
MutableDate(long date) {
this.date = date;
}
MutableDate(Date date) {
this(date.getTime());
}
MutableDate(DateBean bean) {
this(bean.getDate());
}
public Date getDate() {
return new Date(date);
}
public void setDate(Date date) {
this.date = date.getTime();
}
}
I think I'd use the delegation pattern - make an ImmutableDate class with a single DateBean member that must be specified in the constructor:
public class ImmutableDate implements DateBean
{
private DateBean delegate;
public ImmutableDate(DateBean d)
{
this.delegate = d;
}
public Date getDate()
{
return delegate.getDate();
}
}
If ever I need to force immutability on a DateBean d, I just call new ImmutableDate(d) on it. I could have been smart and made sure I didn't delegate the delegate, but you get the idea. That avoids the issue of a client trying to cast it into something mutable. This is much like what the JDK does with Collections.unmodifiableMap() etc. (in those cases, however, the mutation functions still have to be implemented, and are coded to throw a runtime exception; it's much easier if you have a base interface without the mutators).
Yet again it is tedious boilerplate code but it is the sort of thing that a good IDE like Eclipse can auto-generate for you with just a few mouse clicks.
If it's the sort of thing you end up doing to a lot of domain objects, you might want to consider using dynamic proxies or maybe even AOP. It would be relatively easy then to build a proxy for any object, delegating all the get methods, and trapping or ignoring the set methods as appropriate.
I use interfaces and casting to control the mutability of beans. I don't see a good reason to complicate my domain objects with methods like getImmutableInstance() and getMutableInstance().
Why not just make use of inheritance and abstraction? e.g.
public interface User{
long getId();
String getName();
int getAge();
}
public interface MutableUser extends User{
void setName(String name);
void setAge(int age);
}
Here's what the client of the code will be doing:
public void validateUser(User user){
if(user.getName() == null) ...
}
public void updateUserAge(MutableUser user, int age){
user.setAge(age);
}
Does it answer your question?