Corda: Can an admin see the transactions in a Corda network?

I went through the complete architecture of Corda, and I understand that Corda works on a need-to-know basis.
But the confusion here is this:
If a user wants to report a dispute or some problem related to a transaction, where should they go? As each user is a separate process maintaining its own database, is there any admin who can see all the transactions and take some action in a dispute or a problem?

Thanks for the question.
There is no built-in concept of an administrator or centralized node, as this would make anything we build highly centralized (which defeats the purpose of a distributed application).
However, it is possible to include an administrator (regulator, governing body, etc.) as an additional party to all transactions within the CorDapp. This would be done explicitly, in code, by adding the administrator party (node) as a signer on all commands or as a non-signing party to the transactions. The administrator node would then receive information about every transaction on the network.

Related

Google Cloud Platform: Mining cryptocurrencies

I received an email indicating that my Google Cloud project has been suspended because I was supposedly mining cryptocurrencies.
My project is a tool like a calculator, so that issue surely shouldn't be possible.
What could have happened?
In order to create a function, I hired a programmer on UpWork and gave him access to the GCP project.
Well, it seems this developer abused our trust and did something wrong.
What can I do?
The project is now suspended, and in any section I try to visit, the "Appeal" form appears.
I appealed, but I have to wait for Google to reply.
How can I check whether my project has been used for these bad purposes?
I want to cut off any services the developer could have been using.
Unfortunately, you must wait for Google's reply.
As a recommendation, you could review this information to determine whether the usage was intended. Per the Cloud Security Help Center, cryptocurrency mining is often an indication of the use of fraudulent accounts and payment instruments, and verification is required in order to mine cryptocurrency in the cloud.
If you believe your project has been compromised, I recommend that you secure all your instances, which may require uninstalling and then reinstalling your project; you can follow the documented steps for that.
To better protect your organization from misconfiguration and get the best of Google's threat detection, you may consider enabling Security Command Center (SCC) for your organization. To learn more about SCC, see its documentation.
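To get a first picture of what the project has been running, here is a minimal Python sketch (hedged: the project ID is a placeholder, the google-cloud-compute package must be installed, and a suspended project may reject API calls until it is reinstated) that enumerates all Compute Engine instances; unfamiliar or unusually large VMs are a common sign of mining abuse.

    # Minimal sketch: list every Compute Engine instance in a project
    # to spot unfamiliar VMs. "my-project-id" is a placeholder.
    from google.cloud import compute_v1

    def list_all_instances(project_id: str) -> None:
        client = compute_v1.InstancesClient()
        # aggregated_list groups instances by zone across the whole project.
        for zone, scoped_list in client.aggregated_list(project=project_id):
            for instance in scoped_list.instances:
                print(zone, instance.name, instance.machine_type, instance.status)

    list_all_instances("my-project-id")

You should also review the project's billing reports and Cloud Audit Logs for the same period, and revoke the developer's IAM access.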

Hyperledger network approach

Taking the following service description:
X is a platform matching buyers and sellers.
Buyers can join the platform by creating a buyer account, and can browse seller shops, buy, manage their account, ..., on the Buyers client application.
Sellers can join the platform by creating a seller account, and can manage their shops and orders, ..., on the Sellers client application.
I am still confused about the right approach to adopt.
Here I represented the organization X (the platform). I assume that a buyer is not considered an organization but rather a user of X. So every time a buyer creates an account, I register a user under X, save the email and password in an external database, and link this entry to a user in X's wallet.
A seller can be considered an organization (at least to me, but I'm happy to debate that). So every time a seller creates an account, I have to create and add a new organization to the existing network. The sellers will, however, share the same "Seller application", also using an email/password approach.
In most of the samples under the Hyperledger Fabric repo, there are three or four organizations at the start of the network, and it is quite painful to add one more to an existing network. In my case, I could end up with a million organizations, or infinitely many if the service is a success. Can this scale?
Is this the correct approach for this kind of use case? Any feedback or resources related to this use case are welcome.
This doesn't look like a valid use of Hyperledger Fabric. The blockchain is optimized to store transactional information. It isn't a regular DB; if you try, for instance, to store "user profiles", you will have a hard time doing so. For instance, each member of the blockchain network (again, Hyperledger Fabric) is meant to keep a copy of the ledger, so everyone would get access to all user profiles. You can play around with PDC (private data collections), or, as you mention, have virtually infinite users created under a single organization, but that isn't really how it's supposed to be used.
So, again, Hyperledger Fabric is meant to store transactional information (a ledger records transactions). Whatever strategy you implement for your use case, you should keep buyer/seller profiles and information off chain, and use the ledger only for transactional information that members of the network can see. In this scenario Fabric would serve as an audit-trail system, adding trust to each operation between buyers and sellers.
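To make that split concrete, here is a hypothetical Python sketch (none of these names come from the question: `profile_db` stands in for a conventional database, and `ledger_client.submit` for whatever Fabric SDK or gateway invocation your client application uses). Profiles live off chain; the ledger records only the purchase, plus a hash of the off-chain document.

    # Hypothetical sketch of the on-chain/off-chain split.
    import hashlib
    import json

    def register_buyer(profile_db, email: str, password_hash: str) -> int:
        # User profiles stay entirely off chain, in a regular database.
        return profile_db.insert("buyers", {"email": email, "pw": password_hash})

    def record_purchase(ledger_client, buyer_id: int, order: dict) -> None:
        # Only the transactional fact goes on the ledger; the hash lets
        # auditors verify the off-chain order document later.
        order_hash = hashlib.sha256(
            json.dumps(order, sort_keys=True).encode()
        ).hexdigest()
        ledger_client.submit(
            "CreateOrder",
            buyer=str(buyer_id),
            seller=order["seller_id"],
            amount=str(order["total"]),
            order_hash=order_hash,
        )

Fabric then serves purely as the audit trail described above, while account management scales in the off-chain store.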

Microservices Architecture: Cross Service data sharing

Consider the following microservices for an online store project:
The Users Service keeps account data about the store's users (including first name, last name, email address, etc.).
The Purchase Service keeps track of details about users' purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of purchasing user, purchased item title and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to get back data which the Purchase Service does not hold - for example, a user's full name.
The problem gets worse when trying to do more complicated things, like searching purchases by the purchasing user's name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service which consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given ids. That means that on every page load in the Purchase Service, I'll have to make a call to the Users Service in order to get the right user names. Not ideal, but I can live with it.
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names in the Users Service, and then returns all user details which match the criteria. At the Purchase Service, map the relevant ids back to the right names and show them on the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is sort of a code smell? Would love to hear other solutions.
This seems to be a very common and central question when moving to microservices. I wish there were a good answer for it :-)
Regarding the pattern already suggested here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you have data duplication, and you usually need some kind of event bus to share data across services.
There's another option, which is a sort of take on the first: making the search itself a separate service.
So, in your example, you have the User service for managing users. The Purchases service manages purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
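A minimal sketch of that idea, assuming a simple event bus that delivers `user_created` and `purchase_created` events (the event names and fields are illustrative, and real storage would be a database or search index rather than in-memory dicts):

    # Minimal sketch of a Search Service consuming events from other
    # services and maintaining its own denormalized, searchable "view".
    users = {}        # user_id -> full name (all the user data search needs)
    search_view = {}  # purchase_id -> denormalized document

    def on_user_created(event: dict) -> None:
        users[event["user_id"]] = f'{event["first_name"]} {event["last_name"]}'

    def on_purchase_created(event: dict) -> None:
        search_view[event["purchase_id"]] = {
            "purchase_id": event["purchase_id"],
            "user_name": users.get(event["user_id"], ""),  # denormalized copy
            "item_title": event["item_title"],
            "price": event["price"],
        }

    def search_by_user_name(term: str) -> list:
        term = term.lower()
        return [doc for doc in search_view.values()
                if term in doc["user_name"].lower()]

Since the view is derived entirely from events, it can be rebuilt at any time by replaying them.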
It's totally fine to keep appropriate data in different databases; it's called Polyglot Persistence. Yes, you would want to keep user data and purchase data separately and use a message queue to sync them. Millions of users seems fine to me; that's a scalability concern, not a design issue ;-)
In the case of search, you probably want to search on more than just the username, right? So if you use a message queue to update data between services, you can also easily route this data to Elasticsearch, for example. And from Elasticsearch's perspective it doesn't really matter which field it indexes - username or product title.
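For illustration, a minimal sketch with the official Python Elasticsearch client (the index name and document fields are made up; a real consumer would call `index_purchase` for each event drained from the queue):

    # Minimal sketch: index a denormalized purchase document so it can be
    # searched by user name, product title, or any other field.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def index_purchase(event: dict) -> None:
        es.index(index="purchases", id=event["purchase_id"], document={
            "user_name": event["user_name"],
            "item_title": event["item_title"],
            "price": event["price"],
        })

    def search_purchases(term: str) -> list:
        result = es.search(index="purchases", query={
            "multi_match": {"query": term, "fields": ["user_name", "item_title"]}
        })
        return [hit["_source"] for hit in result["hits"]["hits"]]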
I usually use both approaches. Sometimes I have another service which sits on top of x other services and combines their data. I don't really like this approach because it causes dependencies and coupling between services. So, in general, in my last projects we tried to stick to polyglot persistence.
Also bear in mind that if you need x sub-HTTP-requests to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything possible through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of each module to the others. In contrast, the modules in a manufacturing process have access to change data without possessing and controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what they need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files or database tables. My architecture shows how to accomplish this in memory via reference properties.
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", you need to be aware that this creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product users.

Transactions, APIs and Designing for Failure

I have been working on an application that needs to log credit card transactions and depends on an external API. Within my application, I have the concept of an invoice with a total, and a transaction that, when a successful credit card payment is made, deducts from this total.
This is more of a platform-independent question, but I am working with Django, Python and MySQL.
My question centers mainly on the use of transactions when dealing with external APIs and how to design your software to handle potential failures. Both Django and MySQL support transactions, so that in itself is not an issue, but suppose the following scenario:
The credit card is submitted through the payment API.
The credit card is successfully processed.
The response is then logged to the database as a payment on that invoice.
There is an error saving the payment to the database for one reason or another.
What do you do now?
If there were no API call involved, the answer would be clear: roll back the database transaction and raise an error. But having a call to an external API complicates matters, because there is no real way to roll back the external API call.
I am interested if anyone has run into this issue (for credit cards, or similar types of transactions) and how they addressed the problem, or in general some approaches for software design in this case.
It's hard to manage this purely in software. However, if your payment gateway calls a callback to signify that the transaction succeeded, it will presumably log an error if that callback fails to complete, and you should be able to configure it to alert you in that case, perhaps by email. Then it's up to you to rectify the situation manually.
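One common shape for this, sketched below for Django (the `charge_card` helper and the `Payment` and `PendingReconciliation` models are hypothetical): make the irreversible gateway call first, then record it; if recording fails, persist the gateway reference somewhere durable and alert a human, since only manual reconciliation (or a refund issued through the gateway) can restore consistency.

    # Hedged sketch: charge first, then record; fall back to a durable
    # reconciliation record if the database write fails.
    from django.db import transaction

    def process_payment(invoice, card_details):
        # 1. The gateway call is the irreversible step, so do it first.
        gateway_response = charge_card(card_details, invoice.balance)

        try:
            with transaction.atomic():
                # 2. Record the payment; on any error this block rolls back.
                Payment.objects.create(
                    invoice=invoice,
                    amount=gateway_response.amount,
                    gateway_ref=gateway_response.reference,
                )
        except Exception:
            # 3. The charge succeeded but recording failed. Persist enough
            #    to reconcile manually (if the database itself is down,
            #    write to a local file or queue and alert an operator).
            PendingReconciliation.objects.create(
                gateway_ref=gateway_response.reference,
                amount=gateway_response.amount,
            )
            raise

The ordering is the key point: because the gateway call cannot be rolled back, it must not sit inside the database transaction; the database write, which can be retried, comes second.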

Web Services Design Question - Logging messages

We had a debate in the office with respect to audit logging of messages received and sent via Web Services.
I am of the opinion that the entire SOAP message should not be logged in the application audit logs unless a requirement explicitly states that it must be. Only the salient elements of the request need to be part of the audit log, as these provide the evidence required in the audit trail.
My reasons are:
(1) Audit logs, by definition, are always turned on and should not be turned off. So if we decide to log the entire message for the audit trail, it will always be logged, which can cause a huge performance impact during production runs (particularly at peak load).
(2) If the business/technical requirements do not explicitly state this, it is unnecessary overhead. If the information is needed, the runtime engine's tracing capability can be turned on and off to capture the SOAP messages.
What are the general thoughts of experts in this space?
Thanks,
Manglu
Don't confuse auditing with logging. If there is a requirement for auditing then you need to perform auditing.
Since auditing is typically required for legal or policy reasons, you need to understand what actions and activities need to be logged, as well as what data. This is not a technical decision; it needs to be determined by the business. Once you have your requirements, you can project your audit volumes and design your application to take them into account (e.g. performance, storage, etc.).
If you think you have an auditing requirement but it is not explicitly stated, then ask for clarification. You don't want to find this out only after you have been sued.
If you truly have an auditing requirement, then you should probably audit the entire SOAP request message as well as the response, to support non-repudiation.
As an example, let's say you have a health-care application and only audit the key information: personal identifiers (e.g. SSN) and whether the patient is allergic to penicillin. But what happens when a patient dies because "is allergic to penicillin" was false when it shouldn't have been? The audit logs are checked, and you say that you were sent a value of false for that patient, but the other system says they actually sent you a value of true and that you must have a problem with your system. In this scenario, what you need to do is show the exact message that was sent to the web service; because it was signed by the service consumer, you can prove both that it came from them and that the data in the message is what they sent. Then you would follow that information through your system via the audit logs.
Of course, it all goes back to the requirements; if the business finds that auditing only x and y satisfies the relevant legislation or policies, then go with that.
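If the business does decide that full messages must be audited, one way to capture the exact envelopes is sketched below using Python's zeep SOAP client and its HistoryPlugin (the WSDL URL, the `SubmitOrder` operation, and the audit logger are placeholders):

    # Hedged sketch: record the exact SOAP request/response envelopes
    # for the audit trail via zeep's HistoryPlugin.
    import logging
    from lxml import etree
    from zeep import Client
    from zeep.plugins import HistoryPlugin

    audit_log = logging.getLogger("audit")

    history = HistoryPlugin(maxlen=1)
    client = Client("http://example.com/service?wsdl", plugins=[history])

    def call_and_audit(**params):
        response = client.service.SubmitOrder(**params)  # placeholder operation
        # last_sent / last_received hold the envelopes just exchanged.
        audit_log.info(
            "request=%s response=%s",
            etree.tostring(history.last_sent["envelope"], pretty_print=True),
            etree.tostring(history.last_received["envelope"], pretty_print=True),
        )
        return response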
I know from experience that logging everything can lead to pretty huge files, or a lot of data if kept in a database. It's very helpful during development, but in production it becomes a problem. I would suggest logging as you said. But be aware of a situation I came across: we were providing a web service for third-party companies to use, and when there was a dispute about whose fault an error was, we needed the exact SOAP message to prove it wasn't ours. I don't know if this scenario applies to you.