In Node-RED, I'm using some Amazon Web Services nodes (from the node-red-node-aws module), and I would like to read some configuration settings from a file (e.g. the access key ID and the secret key for the S3 nodes), but I can't find a way to set everything up dynamically, as this configuration has to be made in a config node, which can't be used in a flow.
Is there a way to do this in Node-RED?
Thanks!
Unless a node implementation specifically allows for dynamic configuration, this is not something that Node-RED does generically.
One approach I have seen is to have a flow update itself through the runtime's admin REST API - see https://nodered.org/docs/api/admin/methods/post/flows/
That requires you to first GET the current flow configuration, modify the flow definition with the desired values and then post it back.
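As a rough sketch (assuming Node-RED is listening on localhost:1880 with no adminAuth configured; add an Authorization header if it is), the sequence looks like this:

# Fetch the currently deployed flow configuration
curl -s http://localhost:1880/flows -o flows.json

# Edit flows.json to set the placeholder config node's properties, then redeploy it
curl -X POST http://localhost:1880/flows \
     -H "Content-Type: application/json" \
     --data @flows.json

Note that, as far as I know, credential-typed properties are not returned by the GET, so you would add those values to the configuration you POST back.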
That approach is not suitable in all cases; the config node still only has a single active configuration.
Another approach, if the configuration is statically held in a file, is to insert the values into your flow configuration before starting Node-RED - i.e. have a placeholder config node in the flow that you insert the credentials into.
Finally, you can use environment variables: if you set the configuration node's property to something like $(MY_AWS_CREDS), then the runtime will substitute the value of that environment variable on start-up.
You can update your package.json start script to start Node-RED with your desired credentials as environment variables:
"scripts": {
"start": "AWS_SECRET_ACCESS_KEY=<SECRET_KEY> AWS_ACCESS_KEY_ID=<KEY_ID> ./node_modules/.bin/node-red -s ./settings.js"
}
This worked perfectly for me when using the node-red-contrib-aws-dynamodb node. Just leave the credentials in the node blank and they get picked up from your environment variables.
Related
I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled “druid-google-extensions” by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I got the following error when I tried to ingest data from GCS buckets:
Error: Cannot construct instance of org.apache.druid.data.input.google.GoogleCloudStorageInputSource, problem: Unable to provision, see the following errors:

1) Error in custom provider, java.io.IOException: The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  while locating com.google.api.client.http.HttpRequestInitializer for the 3rd parameter of org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  at org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.storage.google.GoogleStorageDruidModule)
  while locating org.apache.druid.storage.google.GoogleStorage

1 error

at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 180] (through reference chain: org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to that case. Please help me.
We want to move data from GCP into an on-prem Druid cluster. We don't want to run the cluster in GCP, so we need to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or shell environment changes.
GOOGLE_APPLICATION_CREDENTIALS must be set to the path of the credentials file; it must not contain the file content itself.
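For example (a sketch; the unit name, drop-in path and key file location are placeholders for your own setup):

# /etc/systemd/system/druid-middlemanager.service.d/override.conf  (hypothetical drop-in)
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/opt/druid/conf/gcs-service-account.json"

After adding the drop-in, run systemctl daemon-reload and restart the Druid services so the variable is picked up.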
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it, and to set the env var to point to the mounted file.
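A minimal Kubernetes sketch, assuming the key file is stored in a Secret named gcp-sa (all names here are illustrative):

# Fragment of a pod/container spec
containers:
  - name: druid
    image: apache/druid:0.22.1
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /var/secrets/google/key.json
    volumeMounts:
      - name: gcp-sa
        mountPath: /var/secrets/google
        readOnly: true
volumes:
  - name: gcp-sa
    secret:
      secretName: gcp-sa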
I have created a DynamoDB table on my dev machine and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript shell window at the localhost:8000/shell URL and it returns a row count of 0. But when trying to call the CreateBatchWrite<T> method I get the error, "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a SubSegment without a Parent Segment. Depending on your setup, when you run a query it might try creating a SubSegment, but it's failing because there is no parent segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool will not create a Segment for you like the actual Lambda environment does on AWS. This can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is setting DisableXRayTracing to true.
Make sure your appsettings.Development.json is set to Copy Always in the properties window. You can do this by including this in your .csproj:
<ItemGroup>
  <None Update="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
If you really want to trace things locally, then make sure you create a parent segment only when running locally (on AWS this would cause problems as you would have two parent segments, one created manually by you, another one created by AWS).
Add this line before any DynamoDB API methods are executed:
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
You can find more info in GitHub discussion https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need to import these two namespaces:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist in AsyncLocal" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding the configuration described in these docs.
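As a minimal sketch of the "create a segment only when running locally" approach (the segment name and the environment check are illustrative, not something the X-Ray SDK prescribes):

using System;
using Amazon.XRay.Recorder.Core;

// Only open a segment manually when we are NOT inside the real Lambda environment,
// where AWS already creates the parent segment for us.
var runningOnLambda = !string.IsNullOrEmpty(
    Environment.GetEnvironmentVariable("AWS_LAMBDA_FUNCTION_NAME"));

if (!runningOnLambda)
{
    AWSXRayRecorder.Instance.BeginSegment("LocalDynamoDbTest"); // hypothetical segment name
}

try
{
    // ... DynamoDB calls such as context.CreateBatchWrite<T>(...).ExecuteAsync() go here ...
}
finally
{
    if (!runningOnLambda)
    {
        AWSXRayRecorder.Instance.EndSegment();
    }
}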
Hi, can I get some help with setting environment-specific configuration?
I have two files for datasources:
server/datasources.json
server/datasources.test.json
I use the script "SET NODE_ENV=test && mocha test/**/*.test.js" on Windows to run my test cases and set the Node environment to test.
LoopBack does not load server/datasources.test.json; instead, the datasource from server/datasources.json is loaded.
I have confirmed the environment using process.env.NODE_ENV, which logs "test".
I have tried to change server/datasources.json to server/datasources.local.json, but then I get an error:
WARNING: Main config file "datasources.json" is missing.
I don't understand what I am doing wrong. Am I supposed to create all the config files for the test environment, like *.test.json? Or is there a different config file where I have to define environment-specific files?
Please check this repo: https://github.com/dhruv004/sample-loopback-example
If you run npm run test from that code, it loads data from local.json (the datasource for the development environment) when it should load data from test.json (the datasource for the test environment).
Looking at your repository, I can see this note from the LoopBack documentation is particularly relevant for you:
A LoopBack application can load multiple configuration files, that can potentially conflict with each other. The value set by the file with the highest priority will always take effect. The priorities are:
Environment-specific configuration, based on the value of NODE_ENV; for example, server/config.staging.json.
Local configuration file; for example, server/config.local.json.
Default configuration file; for example, server/config.json.
In your model-config.json all models have their datasource set to db, so in your case the LoopBack application loads datasources.test.json first. It cannot find a datasource named db there (only testdb), so it falls back to datasources.json. There it finds the datasource db and uses it. Try renaming testdb in datasources.test.json to db and it will take precedence.
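For example, server/datasources.test.json would look roughly like this (the connector settings are illustrative; the key name db is what matters, since that is what model-config.json references):

{
  "db": {
    "name": "db",
    "connector": "memory"
  }
}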
I have set up a cluster for WSO2-IS (2 instances on different machines) based on the information provided here - https://docs.wso2.com/display/CLUSTER44x/WSO2+Clustering+and+Deployment+Guide
Setup DB with a user store, shared registry, 2 local registries
Copied the DB driver jar to component lib
Updated the master-datasources.xml
Updated the registry.xml (made sure the master is read-only false and worker is read-only true)
Updated the axis2.xml and used WKA for the membership scheme
Performed other changes as suggested in the link
Started the master with the -Dsetup option and the worker without it (start commands sketched after this list).
Verified that the governance folder is shown as a symlink
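For reference, the start commands look roughly like this (assuming the default bin/wso2server.sh script that ships with WSO2 IS):

# On the master node (populates the databases on first start)
sh bin/wso2server.sh -Dsetup

# On the worker node
sh bin/wso2server.sh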
I can see the interaction between the two nodes; there are Hazelcast messages related to node joining when the worker is started.
A user created in one node is able to log in to the other instance, and service providers are also automatically available when viewed through the UI.
The problem is that when I create a secondary user store (JDBC) in the first node and go to the list in the second node, the secondary user store is not present and I cannot view its users in the user list either.
Am I missing something, or is this the way the cluster is supposed to behave, i.e. do secondary user stores have to be shared in some other way?
Thanks,
Vikas
Secondary user store configurations are not synced between the two nodes by default. Once you create a secondary user store from the UI, it will create a file in the following location:
[WSO2_IS]/repository/deployment/server/userstores/
This configuration file needs to be copied manually, or you have to use some synchronization mechanism to copy the file to the other node. Since this is not a frequent task, it is usually simpler to just copy the file.
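For example (a sketch; the host name and the user store file name are placeholders for your own setup):

# Copy the secondary user store definition from node 1 to node 2
scp [WSO2_IS]/repository/deployment/server/userstores/MYJDBCSTORE.xml \
    user@node2:[WSO2_IS]/repository/deployment/server/userstores/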
For more information:
https://docs.wso2.com/display/IS500/Configuring+Secondary+User+Stores
I have a WSO2 Governance Registry setup conforming to this blog post: http://blog.shelan.org/2013/02/application-governance-with-wso2-greg.html.
When defining a new application in the WSO2 GR using the menu Metadata > Add > Application, I would like to be able to directly add the actual application artifact (war/car file).
The selected file should then be placed in the SVN location corresponding to the initial state of the lifecycle to which I will bind the application. This of course implies that I would also need to be able to directly add the lifecycle when defining a new application.
The new application form would then be something like this:
Name: ExampleApplication-1.0.0
Type: .war (is now redundant)
Description: My Example Application
Artifact: Selected file ExampleApplication-1.0.0.war
Lifecycle: MyDTAP-Lifecycle_v1
Does anybody know a good starting point for adding this functionality in terms of code hooks or extension points?
If I have understood you correctly, what you need is basically a file upload option in your "Application" RXT (Governance Artifact Configuration) that uploads whatever your file type is and, based on that, fills the derivable information into the artifact's metadata, and also attaches a selected/predefined lifecycle to it at artifact creation. What you are looking for is Registry Handlers [1]. You can probably achieve all the aforementioned tasks through a single handler.
[1] - http://docs.wso2.org/wiki/display/Governance453/Handlers
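As a very rough sketch of what such a handler could look like (the class and package names are illustrative, and the SVN/lifecycle logic is only indicated by comments):

package org.example.governance; // illustrative package

import org.wso2.carbon.registry.core.Resource;
import org.wso2.carbon.registry.core.exceptions.RegistryException;
import org.wso2.carbon.registry.core.jdbc.handlers.Handler;
import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;

// Sketch: react when an Application artifact is stored, push the uploaded
// war/car to the SVN location of the initial lifecycle state and attach the lifecycle.
public class ApplicationArtifactHandler extends Handler {

    @Override
    public void put(RequestContext requestContext) throws RegistryException {
        Resource resource = requestContext.getResource();
        Object content = resource.getContent(); // the uploaded artifact bytes

        // ... copy the content to the SVN path for the initial lifecycle state ...
        // ... attach the predefined lifecycle (e.g. MyDTAP-Lifecycle_v1) to the artifact ...

        // Let the default put behaviour store the resource as usual.
        requestContext.setProcessingComplete(false);
    }
}

The handler would then be registered in registry.xml against the application artifact's media type, as described in the handler documentation linked above.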