I am trying to set up an Active-Active deployment for WSO2 API Manager by following this URL: Configuring an Active-Active Deployment.
Everything is working fine except step 5, where I am trying to set up NFS. I moved the /repository/deployment/server folder to another drive, e.g. to:
D:/WSO2AM/Deployment/server
so that both nodes can share the deployment folder.
Not knowing which config files to change to point the deployment folder to a location other than the default, I edited carbon.xml and set the "RepositoryLocation" element to D:/WSO2AM/Deployment/server, but it looks like that is not enough. When I start the server, I get the following error message:
FATAL - SynapseControllerFactory The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
[2019-03-12 15:54:49,332] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
org.apache.synapse.SynapseException: The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
I would appreciate it if someone could help me set up NFS so that both nodes share the same deployment folder and I don't have to worry about syncing them through some other mechanism.
Thanks
After struggling for almost a day, I found a solution in a completely separate WSO2 thread:
Enable Artifact Synchronization
In that thread, they ask you to create an SMB share (for Windows) for the Deployment and Tenants directories; for APIM purposes, we only need an SMB share for the /repository/deployment/server directory.
Creating the symbolic link is just one command, as seen below:
mklink /D <APIM_HOME>\repository\deployment\server D:\WSO2\Shared\deployment\server
We need to create the symlink on both nodes so that they point to the same location.
Once done, no configuration changes are needed on the APIM side; it works by default, and the shared-deployment scenario described above is in place.
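For example, assuming API Manager is installed under C:\WSO2\wso2am on each node (a hypothetical path) and the shared SMB location is D:\WSO2\Shared\deployment\server, copy the contents of the existing server directory to the share once, then run the following on each node from an elevated command prompt (mklink only succeeds if the link path does not already exist, hence the rmdir first):
rmdir /S /Q C:\WSO2\wso2am\repository\deployment\server
mklink /D C:\WSO2\wso2am\repository\deployment\server D:\WSO2\Shared\deployment\server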
I'm trying to create a simple mapping task job in Informatica Cloud that copies a text file from a subdirectory to its parent directory. Even if I give both folders 777 permissions on the Secure Agent where the process runs, I get the following error when I run the process:
"[ERROR]
com.informatica.cloud.api.adapter.runtime.exception.FatalRuntimeException:
Actual File does not have execute permission!!"
How do I resolve this issue?
We found the issue. Salesforce automatically started enforcing "enhanced domains" in sandboxes even though our org isn't ready to use that feature yet. I learned from my client that this was only happening in our sandbox, and the issue started happening when this change was implemented. We temporarily disabled the feature in the Salesforce sandbox and will reactivate it once our third party vendor has our org ready to use enhanced domains.
This question assumes you have used Google Drive Sync, or at least know what files it creates in your cloud drive.
While using rclone to sync a local Ubuntu directory to a Google Drive (a.k.a. gdrive) location, I found that rclone wasn't able to create a directory there (error: googleapi: Error 500: Internal Error, internalError; the Google Cloud Platform API console revealed that the gdrive API call drive.files.create was failing).
By location I mean the root of the directory structure that the Google Drive Sync app creates in the cloud, e.g. Computers/laptopName/, which contains syncedFolder1, syncedFolder2, and so on. In this case, the Google Drive Sync app (famously unavailable on Linux) was running on a separate Windows machine. It was in this location that rclone wasn't able to create a directory.
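For reference, this is the kind of command that fails (the remote name gdrive and the paths are hypothetical, but the target is under the Computers hierarchy described above):
rclone mkdir "gdrive:Computers/laptopName/newFolder"
rclone sync /home/me/newFolder "gdrive:Computers/laptopName/newFolder"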
Forget rclone. Trying to manually create the folder in the web app also fails as follows.
Working...
Could not execute action
Why is this happening, and how can I achieve this: making a directory in the cloud location where Google Drive Sync has put all my synced folders?
Basically, you can't. I found an explanation here.
If I am correct in my suspicion, there are a few things you have to understand:
Even though you may be able to create folders inside the Computers isolated containers, doing so will immediately create that folder not only in your cloud but also on that computer/device. Any change to anything inside a Computers container is automatically synced to the device/computer the container is linked to, just as any change on the device/computer side is synced to the cloud.
It is not possible to create anything at the "root" level of each container in the cloud. If that were permitted, the preferences set in Backup & Sync would have to be magically altered to add that folder, so it is not allowed.
So while folders inside the synced folders may be created, no new modifications may be made in the "root" directory.
I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. It is in my .gitignore so that it does not get pushed to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without uploading it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other, non-secret configuration settings as a file in the repository. If you need more flexibility later, you can pick up other tools then.
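As a minimal sketch of the environment route (the file path and variable names are hypothetical), keep the secret values in a small env file that exists only on the server and is never committed, for example /etc/myapp/production.env:
DJANGO_SECRET_KEY=change-me
DATABASE_PASSWORD=change-me
Load it into the environment before starting the app:
set -a; . /etc/myapp/production.env; set +a
python manage.py runserver
production.py can then read the values with os.environ["DJANGO_SECRET_KEY"] and so on, so the secrets themselves never enter the repository.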
I'm trying to create DC/OS services that download artifacts (custom config files, etc.) from HDFS. I was using a simple FTP server for this before, but I wanted to switch to HDFS. Using "hdfs://" in an artifact URI is allowed, but it doesn't work correctly.
The artifact fetch ends with an error because there is no "hadoop" command. Weird. I read that I need to provide my own Hadoop for it.
So I downloaded Hadoop and set up the necessary variables in /etc/profile. I can run "hadoop" without any problem when SSH'ing into the node, but the service still fails with the same error.
It seems that the environment variables configured in the service are only applied after the artifact fetch, because they don't help at all. It also looks like services completely ignore the /etc/profile file.
So my question is: how do I set everything up so my service can fetch artifacts stored on HDFS?
The Mesos fetcher supports local Hadoop clients; please check your agent configuration, and in particular your --hadoop_home setting.
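As a sketch of what that could look like on a DC/OS agent node, assuming Hadoop was unpacked to /opt/hadoop (the path, and the use of /var/lib/dcos/mesos-slave-common for extra agent settings, are assumptions to check against your DC/OS version): Mesos agent flags can also be supplied as MESOS_-prefixed environment variables, so pointing the fetcher at the local client and restarting the agent might be:
echo 'MESOS_HADOOP_HOME=/opt/hadoop' | sudo tee -a /var/lib/dcos/mesos-slave-common
sudo systemctl restart dcos-mesos-slave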
In some of my applications I have to manage environment-specific attributes/variables like:
- folder paths
- REST API URLs
- credentials
- ...
At the moment I'm manually setting these variables in the configuration registry of each server. This is quite heavy when you have to deploy a new server, because you have to recreate everything manually (I haven't found a way to initialize the registry from an XML file, for instance).
I've seen different approaches, like:
- writing different versions of the endpoints, sequences, ... and creating a different CAR for distribution to each environment
- using the local registry with different entries
- using the governance registry (I have no experience with this)
What is, in your opinion, the best approach for this?
Thanks for helping
You can find the best practices guide for WSO2 Enterprise Integrator (ESB, DSS, BPS and MB) at [1]. It also explains how to manage environment-specific variables.
[1] https://docs.wso2.com/display/EI611/WSO2+Enterprise+Integrator+Best+Practices
What I finally did (and what has been working for some weeks now) is, for each of my projects:
Create a "master" Maven project that contains:
An ESB project
One registry project per environment, containing all environment-dependent variables (hosts, passwords, paths, ...)
One Composite Application project per environment, which packages the ESB project with the correct registry values (note that if you deploy everything to the ESB, even the registry project must be given the "EnterpriseServiceBus" server role)
The next step will be to integrate everything into Jenkins and automate building the CAR files with Maven, as sketched below.
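Since everything is a module under the master Maven project, the Jenkins job can come down to a single Maven invocation; each Composite Application project then produces its .car file under its own target/ directory (the module name below is hypothetical):
cd master-project
mvn clean install
ls environment-dev-capp/target/*.car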