I am making a Spring Boot application which will act as a client for the Corda layer.
To do this I have to put my contracts JAR and workflows JAR in my client application.
Is this approach correct, and is it safe from a security standpoint?
Is there any alternative that does not require giving out the contracts and workflows JARs?
When you build the clients module, it will fat-jar all of the dependencies (including your CorDapps) into the JAR file of your Spring Boot app (i.e. the webserver).
It's safe, because if you give some party the webserver JAR file and they extract your CorDapp JAR files from it, they won't be able to transact with your nodes if they're not part of your network.
And if they are part of your network, then it shouldn't be a problem that they received that JAR file, because most likely they already have the CorDapps that are part of it.
Maybe they are part of your network but should only have a certain subset of your CorDapps (i.e. they should be able to run only certain flows); in that case your workflows and contracts should have some validations inside of them that restrict certain actions to certain parties.
For example, you can create a custom configuration file (see here how); inside that file you specify the entity that can be the issuer of your tokens. Then inside your flow you fetch that value from the file and compare getOurIdentity() with the fetched value; if they mismatch, you throw an exception.
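For illustration, a minimal sketch of that first approach in Java, assuming Corda 4's CordappConfig API; the config key tokenIssuer and the flow name are placeholders for the example:

```java
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.cordapp.CordappConfig;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.identity.CordaX500Name;

@InitiatingFlow
@StartableByRPC
public class IssueTokensFlow extends FlowLogic<Void> {

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Read the permitted issuer from the CorDapp configuration file.
        CordappConfig config = getServiceHub().getCordappProvider().getAppContext().getConfig();
        CordaX500Name allowedIssuer = CordaX500Name.parse(config.getString("tokenIssuer"));

        // Refuse to run if the node executing this flow is not the configured issuer.
        if (!getOurIdentity().getName().equals(allowedIssuer)) {
            throw new FlowException("Only " + allowedIssuer + " is allowed to issue these tokens.");
        }

        // ... build, verify, sign and finalise the issuance transaction as usual ...
        return null;
    }
}
```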
Another example, you can put a list of parties as black-listed entities in a jar or zip file then upload that file to your node. Inside your flow you can add that file as an attachment to your transaction, then inside your contract, you can fetch that file and read its contents to find out if one of the participants is on the black-listed entities list or not. See example here.
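And a rough sketch of the second idea: it assumes the flow attached a jar containing a text file named blacklisted-parties.txt with one X.500 name per line (the file name and the contract class are made up for the example):

```java
import net.corda.core.contracts.Attachment;
import net.corda.core.contracts.Contract;
import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.CordaX500Name;
import net.corda.core.transactions.LedgerTransaction;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;

public class BlacklistAwareContract implements Contract {

    // Assumed name of the text file inside the attached jar/zip, one X.500 name per line.
    private static final String BLACKLIST_FILE = "blacklisted-parties.txt";

    @Override
    public void verify(LedgerTransaction tx) {
        Set<CordaX500Name> blacklisted = new HashSet<>();

        // Scan every attachment on the transaction for the blacklist file.
        for (Attachment attachment : tx.getAttachments()) {
            try (JarInputStream jar = attachment.openAsJAR()) {
                JarEntry entry;
                while ((entry = jar.getNextJarEntry()) != null) {
                    if (entry.getName().equals(BLACKLIST_FILE)) {
                        BufferedReader reader = new BufferedReader(new InputStreamReader(jar));
                        String line;
                        while ((line = reader.readLine()) != null) {
                            if (!line.trim().isEmpty()) {
                                blacklisted.add(CordaX500Name.parse(line.trim()));
                            }
                        }
                    }
                }
            } catch (IOException e) {
                throw new IllegalArgumentException("Could not read attachment", e);
            }
        }

        // Reject the transaction if any participant appears on the blacklist.
        for (ContractState state : tx.getOutputStates()) {
            for (AbstractParty participant : state.getParticipants()) {
                CordaX500Name name = participant.nameOrNull();
                if (name != null && blacklisted.contains(name)) {
                    throw new IllegalArgumentException(name + " is a blacklisted entity.");
                }
            }
        }
    }
}
```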
With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read its contents. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept that's similar to Kubernetes, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same though so you'll have to make the determination if one will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but those also come through an environment variable, VCAP_SERVICES). This won't be a file, but perhaps you can have your UI read that instead. (You didn't mention how the UI gets that file, so I can't comment further; if you elaborate on that point, like whether it's fetched over HTTP or read from disk, I could expand on this option.)
If it absolutely needs to be a file, your application could read the environment variable contents and write it to disk when it starts. If your application isn't able to do that like if you're using Nginx, you could include a .profile script at the root of your application that reads it and generates the file. For example: echo "$CFG_VAR" > /dist/file or whatever you need to do to generate that file.
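If it helps, here's a tiny sketch of that idea in Java; the variable name CFG_VAR and the dist/feature-flags.json path are just placeholders for whatever your UI expects:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigFileWriter {

    public static void main(String[] args) throws Exception {
        // CFG_VAR and the target path are assumptions; adjust to your setup.
        String config = System.getenv("CFG_VAR");
        if (config == null) {
            throw new IllegalStateException("CFG_VAR is not set; check your cf set-env or manifest.");
        }

        Path target = Paths.get("dist", "feature-flags.json");
        Files.createDirectories(target.getParent());
        Files.write(target, config.getBytes(StandardCharsets.UTF_8));
    }
}
```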
A couple more notes when using environment variables. There are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They are also not great for binary configuration, in which case you'd need to base64-encode your data first.
You can pull the config file from a config server and cache it locally. This can be pretty simple. The first thing your app does when it starts is to reach out and download the file, place it on the disk and the file will persist there for the duration of your application's lifetime.
If you don't have a server-side application like if you're running Nginx, you can include a .profile script (can be any executable script) at the root of your application which can use curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
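As a sketch of that startup download (Java 11's HttpClient; the URL and target path are assumptions, not anything CloudFoundry-specific):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigFetcher {

    public static void main(String[] args) throws Exception {
        Path target = Paths.get("dist", "feature-flags.json");
        Files.createDirectories(target.getParent());

        // Fetch the file once at startup; it stays on local disk for the
        // lifetime of this application instance.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://config.example.com/ui/feature-flags.json")).build();
        client.send(request, HttpResponse.BodyHandlers.ofFile(target));
    }
}
```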
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. It has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use, this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.
I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app when it is on the server, without having to upload it to GitHub where it is exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools then.
I am trying to understand how data from multiple sources and systems gets into HDFS. I want to push web server log files from 30+ systems. These logs are sitting on 18 different servers.
Thx
Veer
You can create a MapReduce job. The input for your mapper would be a file sitting on a server, and your reducer would decide which path to put the file at in HDFS. You can either aggregate all of your files in your reducer, or simply write the file as-is at the given path.
You can use Oozie to schedule the job, or you can run it sporadically by submitting the MapReduce job on the server which hosts the JobTracker service.
You could also create a Java application that uses the HDFS API. The FileSystem object can be used to do standard file system operations, like writing a file to a given path.
Either way, you need to request the file creation through the HDFS API, because the NameNode is responsible for tracking how the file is split into blocks and which DataNodes they are written to.
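To give a feel for the second option, a minimal sketch using the Java HDFS client; the NameNode URI and both paths are placeholders for your cluster and servers:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LogUploader {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the cluster; the NameNode then coordinates where the blocks go.
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        try (InputStream in = Files.newInputStream(Paths.get("/var/log/httpd/access.log"));
             FSDataOutputStream out = hdfs.create(new Path("/logs/webserver01/access.log"))) {
            // Stream the local log file straight into HDFS.
            IOUtils.copyBytes(in, out, conf);
        }
    }
}
```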
Is it possible to create a data source inside an application (as a resource) and then use that same data source in another application within the same organization?
This is not possible. The reason is that WSO2 Cloud (App Cloud, to be precise) has application-level resource isolation. So, if you create a data source as a resource in one application, it is not visible to another application, even within the same organization.
One thing you can do is create another data source, pointing to the same database, in your other application.
My client is requesting to be notified any time one of their business processes fails for any reason. I had the idea of writing a separate application that will run as an "observer" and check various parts of the process.
An example would be that a daily file was generated and uploaded to an FTP location. The "Observer" might have the following "tests" :
Connect to the FTP
Go to folder where file should exist
Find file with naming convention
Verify create date of file
Failure of any step will send an alert email and also log to a report (both, in case either the database or email is down).
My question is.... Are there any products out there that do something close to this? I'd rather buy if there is something robust out there. If not, this almost seems like a unit test platform... Anything out there for testing I could potentially repurpose?
As an FYI, we are a Microsoft/Windows based shop.
Thx in advance!
You could even use a Continuous Integration framework for this. They normally monitor source code repositories and build & test things, but they could be used for this as well.
For instance, Hudson, Jenkins and CruiseControl.NET are a few good open source ones that can easily be set up for something like this. Just change what is monitored from a repository to the file system or FTP, and write a small script which checks what you need. Everything else comes for free with the framework, i.e. email, a web interface for monitoring, and running things.
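For example, a small check along these lines (Java with Apache Commons Net; the host, credentials, folder and naming convention are placeholders) is enough for the framework to flag a failure and send its alert e-mail:

```java
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

import java.time.LocalDate;
import java.time.ZoneId;

public class DailyFileCheck {

    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");
        if (!ftp.login("observer", "secret")) {
            fail("Could not log in to the FTP server");
        }
        if (!ftp.changeWorkingDirectory("/outbound/daily")) {
            fail("Expected folder /outbound/daily does not exist");
        }

        LocalDate today = LocalDate.now();
        boolean found = false;
        for (FTPFile file : ftp.listFiles()) {
            // Naming convention check, e.g. report_YYYYMMDD.csv
            if (file.getName().matches("report_\\d{8}\\.csv")) {
                LocalDate created = file.getTimestamp().toInstant()
                        .atZone(ZoneId.systemDefault()).toLocalDate();
                if (created.equals(today)) {
                    found = true;
                    break;
                }
            }
        }
        ftp.logout();
        ftp.disconnect();

        if (!found) {
            fail("Today's report file was not found or has the wrong create date");
        }
    }

    private static void fail(String reason) {
        // A non-zero exit code is enough for Jenkins/Hudson/CruiseControl.NET
        // to mark the job failed and send its configured alert e-mail.
        System.err.println(reason);
        System.exit(1);
    }
}
```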
Just an idea.