Error when deploying File Connector in WSO2 EI

Below is the error I receive after deploying my CAR file onto WSO2 EI 6.6.0 / 7.0.0.
[screenshot of the error]
I created the file transfer logic in Integration Studio 8.0.1 and added the File Connector mediator, version 4.0.11.
After I deploy the CAR file on EI, I get:
ERROR {org.wso2.carbon.mediation.library.service.MediationLibraryAdminService} - Unable to update status for : {org.wso2.carbon.connector}file :: Template configuration : null cannot be built for Synapse Library artifact
followed by an error displaying:
ERROR {org.apache.synapse.mediators.template.InvokeMediator} - Sequence template org.wso2.carbon.connector.file.init cannot be found
At first the synapse-libs folder was empty, so I added the file connector ZIP into it, after which {org.wso2.carbon.connector}file was created in the imports folder. But even after doing this, I still get the same error.
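For reference, my understanding from the WSO2 documentation is that a correctly enabled connector gets a descriptor in the imports folder shaped roughly like the sketch below (the name and package values are the ones from the error above; status="enabled" is the part that matters):

    <!-- imports/{org.wso2.carbon.connector}file.xml (expected shape, as I understand it) -->
    <import xmlns="http://ws.apache.org/ns/synapse"
            name="file"
            package="org.wso2.carbon.connector"
            status="enabled"/>

My understanding is that if the import is missing or not enabled, the connector's templates are not deployed, which would match the "cannot be found" error above.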
I have tried connecting locally as well as via FTP to a local server.
I used a YouTube video to implement this transfer: https://www.youtube.com/watch?v=V2q7Y176Euw
These are my sequences:
[screenshots of the two sequences]
My Scheduled task:
[screenshot of the scheduled task]
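For readers who can't see the screenshot, a scheduled task that injects into a sequence generally looks something like this sketch (the task name, interval, and sequence name here are placeholders, not my exact values):

    <task name="FileTransferTask"
          class="org.apache.synapse.startup.tasks.MessageInjector"
          group="synapse.simple.quartz"
          xmlns="http://ws.apache.org/ns/synapse">
      <trigger interval="15"/>
      <!-- injectTo/sequenceName tell the injector which sequence to run -->
      <property xmlns:task="http://www.wso2.org/products/wso2commons/tasks"
                name="injectTo" value="sequence"/>
      <property xmlns:task="http://www.wso2.org/products/wso2commons/tasks"
                name="sequenceName" value="FileTransferSequence"/>
      <property xmlns:task="http://www.wso2.org/products/wso2commons/tasks"
                name="message">
        <request xmlns=""/>
      </property>
    </task>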
My Local entries:
[screenshots of the local entries]
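For context, my understanding of File Connector 4.x is that the connection is defined in a local entry wrapping file.init, and sequences then reference it through configKey, roughly as in this sketch (the connection name, working directory, and file path are placeholders):

    <!-- Local entry holding the connection; file.init is the template the error says it cannot find -->
    <localEntry key="myFileConnection" xmlns="http://ws.apache.org/ns/synapse">
      <file.init>
        <connectionType>LOCAL</connectionType>
        <name>myFileConnection</name>
        <workingDir>/home/user/file-transfer</workingDir>
      </file.init>
    </localEntry>

    <!-- Sequence reading a file through that connection -->
    <sequence name="FileTransferSequence" xmlns="http://ws.apache.org/ns/synapse">
      <file.read configKey="myFileConnection">
        <path>/in/test.txt</path>
      </file.read>
    </sequence>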
Any assistance will be much appreciated.

Related

How to process Salesforce error files in Informatica Cloud

We are integrating data into Salesforce from a Hive database source using Informatica Cloud.
When we use the SFDC standard API, it generates an error file with all rejected records.
We have to re-process the error file to make sure all source records are loaded into Salesforce. The error file is generated on a Unix box under the
path: \apps\Data_Integration_Server\data\error.
The file name is in the format _timestamp.csv, e.g. s_mtt_0103YB0Z000000000012_3_26_2020_12_21_standard_error.csv
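For illustration, a minimal sketch (Python, assuming the directory and the *_standard_error.csv naming shown above; everything else is a placeholder) of how the newest error file could be picked up for re-processing:

    import glob
    import os

    # Directory from the description above (forward slashes on the Unix box).
    ERROR_DIR = "/apps/Data_Integration_Server/data/error"

    def latest_error_file(directory):
        """Return the most recently modified *_standard_error.csv, or None."""
        candidates = glob.glob(os.path.join(directory, "*_standard_error.csv"))
        return max(candidates, key=os.path.getmtime) if candidates else None

    if __name__ == "__main__":
        print(latest_error_file(ERROR_DIR))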
Can anyone please help me? I didn't find answers in the Informatica Cloud platform community.

GCP create function zip upload error without description

I'm trying to create a simple GCP Cloud Function via the GCP console, but the ZIP upload fails every time without a detailed reason.
The ZIP file includes the source files directly (and not a folder containing the source files).
As a result, the function isn't being created. I've tried to search online but couldn't find an answer.
[screenshot of the error message]
Thanks.
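For comparison, a minimal packaging sketch (Python; main.py and requirements.txt stand in for the actual source files) that keeps the sources at the root of the archive, which is what Cloud Functions expects:

    import zipfile

    # Write each source file at the archive root, not inside a subdirectory.
    with zipfile.ZipFile("function.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("main.py", arcname="main.py")
        zf.write("requirements.txt", arcname="requirements.txt")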

Google Cloud AI Platform: Image Data Labeling Service Error - Image URI not valid

Error in Google Cloud Data Labeling Service:
I am trying to create a dataset of images in Google's Data Labeling service.
I am using a single image to test it out.
I created a Google Storage bucket named my-bucket.
I uploaded an image to my-bucket; the image file name is testcat.png.
I created and uploaded a CSV file (UTF-8) with the URI path of the image stored inside it.
The image URI path as stored in the CSV file: gs://my-bucket//testcat.png
I named the CSV file testimage.csv.
I uploaded the CSV file to the same bucket, my-bucket, i.e. testimage.csv and testcat.png are in the same Google Storage bucket (my-bucket).
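For reference, my understanding of the import CSV format is one gs:// URI per line, with a single slash after the bucket name, i.e. the contents of testimage.csv would be just:

    gs://my-bucket/testcat.png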
When I try to create the dataset in the Google console, GCP gives me the following error message:
Failed to import dataset. gs://my-bucket/testcat.png is not a valid youtube uri nor a readable file path.
I've checked multiple times, and the URI for this image in Google Storage is exactly the same as what I've used. I've tried at least 10-15 times and the error persists.
Has anyone faced and successfully resolved this issue?
Your help is greatly appreciated.
Thanks!
As you can see in the AI Platform Data Labeling Service documentation, there is a service update due to the coronavirus (COVID-19) health emergency stating that data labeling services are limited or unavailable until further notice:
You can't start new data labeling tasks through the Cloud Console, Google Cloud SDK, or the API.
You can request data labeling tasks only through email at cloudml-data-customer@google.com.
New data labeling tasks can't contain personally identifiable information.

Tibco 7.14 connectivity issues with Hive EMR (AWS) using JDBC driver

I am unable to view table contents in Tibco Spotfire Analytics Explorer.
I copied the Hive EMR JAR files to the library folder on the Tibco server. Once the server was up, I configured the data source in Information Designer. I am able to set up the data source and I can see the schema, but when I try to expand a table, I don't get any results.
This is the error message I am getting:
Error message: An issue occurred while creating the default model. It may be partially constructed.
The data source reported a failure.
InformationModelException at Spotfire.Dxp.Data:
Error retrieving metadata: java.lang.NullPointerException (HRESULT: 80131500)
The following is the URL template in Information Designer:
jdbc:hive2://<host>:<port10000>/<database>
which I changed to:
jdbc:hive2://sitename:10000/db_name
Please let me know what needs to be changed in the driver config, or anywhere else, to see the contents of the table.

Unable to upload bpmn (.bar) file in BPS Server

I am trying to upload a BPMN process (designed in Eclipse using Activiti) to the BPS server. I have done all the required steps, but the .bar file present in the deployment folder is still not getting deployed to the server.
The error I'm facing is: Removing the faulty archive : Order_approval.bar
There is no way to remove activities from the assembly. The only solution to your problem is to check the log to see the problem related to your Web Dynpro DC and then rectify your Web Dynpro DC, or create an activity which will delete your Web Dynpro DC.