Connecting to Camunda

I am not able to connect to Camunda Cloud via Node.js:
12:04:22.537 | zeebe | INFO: Error connecting to Camunda Cloud.
12:04:22.656 | zeebe | ERROR: [topology]: Attempt 7 (max: -1).
12:04:22.661 | zeebe | ERROR: [topology]: 14 UNAVAILABLE: DNS resolution failed

Make sure the environment variables documented here: https://docs.camunda.io/docs/apis-clients/cli-client/ are set correctly.
export ZEEBE_ADDRESS='[Zeebe API]'
export ZEEBE_CLIENT_ID='[Client ID]'
export ZEEBE_CLIENT_SECRET='[Client Secret]'
export ZEEBE_AUTHORIZATION_SERVER_URL='[OAuth API]'
https://docs.camunda.io/docs/guides/setup-client-connection-credentials/
shows you how to create and download these.
Since the DNS resolution fails, you may have gotten the ZEEBE_ADDRESS wrong.
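Since the error is a DNS failure, a quick sanity check is to split the configured address into host and port and try to resolve the host yourself. A minimal sketch, assuming a POSIX shell; the address below is a made-up placeholder, not a real cluster address:

```shell
# Made-up example address; substitute the Zeebe address from your cluster details.
ZEEBE_ADDRESS='xxxxxxxx-xxxx.bru-2.zeebe.camunda.io:443'

# Split host:port using shell parameter expansion.
host="${ZEEBE_ADDRESS%:*}"   # everything before the last ':'
port="${ZEEBE_ADDRESS##*:}"  # everything after the last ':'
echo "host=$host port=$port"

# Then check that the host actually resolves, e.g.:
# nslookup "$host"
```

If the host part contains a scheme, a trailing slash, or a typo in the cluster region, `nslookup` will fail the same way the client does.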

Related

Migrate to updated APIs

I'm getting an error to migrate API from GKE though I'm not using the said API /apis/extensions/v1beta1/ingresses
I ran the command kubectl get deployment [mydeployment] -o yaml and did not find the API in question
It seems it is an IngressList that calls the old API. To check, you can use the following command, which will give you the entire ingress info:
kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
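If you want just the names of the offending objects rather than the full dump, a `jq` filter over that raw output works. A minimal sketch, assuming `jq` is installed; the sample JSON below is a stand-in for the real API response:

```shell
# Stand-in for: kubectl get --raw /apis/extensions/v1beta1/ingresses
sample='{"items":[{"metadata":{"namespace":"default","name":"my-ingress"}}]}'

# List each ingress as namespace/name.
old_ingresses=$(echo "$sample" | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"')
echo "$old_ingresses"

# Against a real cluster:
# kubectl get --raw /apis/extensions/v1beta1/ingresses \
#   | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"'
```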
I have the same issue, even though I have upgraded the node version from 1.21 to 1.22.

AWS "amplify push" failed with "Command failed with exit code 1: yarn --production"

I was following this tutorial to add a Lambda function to my AWS Amplify Node.js project. When I run:
amplify push
it ends with this error:
Current Environment: dev
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------- | --------- | ----------------- |
| Function | mylambda | Create | awscloudformation |
| Api | myapi | Create | awscloudformation |
✖ An error occurred when pushing the resources to the cloud
Packaging lambda function failed with the error
Command failed with exit code 1: yarn --production
An error occurred during the push operation: Packaging lambda function failed with the error
Command failed with exit code 1: yarn --production
I tried recreating the Amplify project, but it ends with the same error.
Try this:
First, go to the path:
/myproject-backend/amplify/backend/function/lambdaName/src
Then run:
npm install
Finally, run:
amplify push
I was getting the same error. In my case, I had Hadoop YARN installed on my macOS, and the yarn command on my terminal was invoking Hadoop YARN.
I removed (renamed) the /usr/local/Cellar/hadoop directory to remove Hadoop. After that, amplify push ran successfully.
I experienced the same issue with Amplify on Ubuntu (WSL), but the Windows system had no such issue. I suspect it was due to yarn not being installed properly in WSL, as yarn --version showed 0.32+git. I was having trouble uninstalling yarn in WSL...
A follow-up: I figured out what was happening with yarn on Ubuntu. It was provided by the cmdtest package; removing that package also removed yarn, and then amplify push succeeded.
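The answers above boil down to: make sure the `yarn` on your PATH is the JavaScript package manager, not Hadoop YARN or cmdtest's yarn. A minimal sketch of that check, assuming a POSIX shell; the version-prefix test is a heuristic based on the `0.32+git` string reported above:

```shell
# Heuristic: cmdtest's yarn reports a version like "0.32+git",
# while the JS package manager reports e.g. "1.22.19".
is_cmdtest_yarn() {
  case "$1" in
    0.32*) return 0 ;;  # looks like cmdtest's yarn
    *)     return 1 ;;  # assume the real package manager
  esac
}

ver=$(yarn --version 2>/dev/null || echo "none")
if is_cmdtest_yarn "$ver"; then
  echo "yarn on PATH is cmdtest's yarn; remove the cmdtest package"
else
  echo "yarn version: $ver"
fi
```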

How to find the schema of Airflow Backend database?

I am using apache airflow (v 1.10.2) on Google Cloud Composer, and I would like to view the schema of the airflow database. Where can I find this information?
There are a couple of ways I can think of, given the current design.
External metadata DB: if you can connect to the DB, you can get the schema from it directly.
From the UI you can go to Data Profiling and run queries against the metadata tables (depending on your database type, e.g. MySQL or PostgreSQL), find the information there, and create a schema diagram.
I hope this helps.
According to the Composer architecture design, Cloud SQL is the main place where all the Airflow metadata is stored. However, in order to grant client applications on the GKE cluster authorized access to the database, Composer uses the Cloud SQL Proxy service. In particular, in a Composer environment you can find an airflow-sqlproxy* Pod that brokers connections to the Airflow Cloud SQL instance.
That said, I believe there will be no problem establishing a connection to the above-mentioned Airflow database from any of the GKE cluster workloads (Pods).
For instance, I can connect from an Airflow worker via the airflow-sqlproxy-service.default Cloud SQL proxy service and then explore the DB with the mysql command-line utility:
kubectl -it exec $(kubectl get po -l run=airflow-worker -o jsonpath='{.items[0].metadata.name}' \
-n $(kubectl get ns| grep composer*| awk '{print $1}')) -n $(kubectl get ns| grep composer*| awk '{print $1}') \
-c airflow-worker -- mysql -u root -h airflow-sqlproxy-service.default
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+---------------------------------+
| Database                        |
+---------------------------------+
| information_schema              |
| composer-1-8-3-airflow-1-10-3-* |
| mysql                           |
| performance_schema              |
| sys                             |
+---------------------------------+
5 rows in set (0.00 sec)
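The kubectl command above looks up the Composer namespace twice; it can be resolved once into a variable first. A minimal sketch, assuming a POSIX shell; the sample listing stands in for real `kubectl get ns` output, and the namespace name is made up:

```shell
# Stand-in for: kubectl get ns
ns_list='composer-1-8-3-airflow-1-10-3-abcdef   Active   10d
default                                Active   10d'

# Pick the first namespace whose name starts with "composer".
NS=$(printf '%s\n' "$ns_list" | awk '/^composer/ {print $1; exit}')
echo "namespace: $NS"

# Against a real cluster:
# NS=$(kubectl get ns | awk '/^composer/ {print $1; exit}')
# POD=$(kubectl get po -n "$NS" -l run=airflow-worker -o jsonpath='{.items[0].metadata.name}')
# kubectl exec -it "$POD" -n "$NS" -c airflow-worker -- \
#   mysql -u root -h airflow-sqlproxy-service.default
```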

gcloud project id in node.js error is different from gcloud set project id?

I'm trying to get Google Cloud Vision to work with node.js by following their documentation here. Although I keep getting:
PERMISSION_DENIED: Cloud Vision API has not been used in project 5678.. before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/vision.googleapis.com/overview?project=5678.. then retry
To note though the project number is very different from what I see in gcloud's output when I gather information from the following commands:
gcloud info |tr -d '[]' | awk '/project:/ {print $2}'
'my-set-project' <=== set project id in use
gcloud projects list
which outputs:
PROJECT_ID='my-set-project' // <=== Same id as "gcloud info" command
NAME='my-project-name'
PROJECT_NUMBER=1234.. // <===== Different number from Node.js Error
I have already enabled the API, downloaded a service key, and set up export GOOGLE_APPLICATION_CREDENTIALS=[path/to/my/service/key]. But right now I believe the service key linkup is not the issue yet, as I have not yet had gcloud pointing to 'my-set-project'.
I have also found a default.config at
cat /Users/My_Username/.config/gcloud/application_default_credentials.json
which has:
{
"client_id": "5678..-fgrh // <=== same number id as node.js error
So how can I get the gcloud CLI to switch to project "1234", which has the API enabled? I thought running the command:
gcloud config set project 'my-set-project'
would get running Node apps using GCP to use the project '1234' instead of the default '5678'. Any help will be appreciated, as I'm still getting used to the gcloud CLI. Thanks.
Try:
gcloud auth activate-service-account --key-file=/path/to/your/service_account.json
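Note that the Node.js client libraries read the key file pointed to by GOOGLE_APPLICATION_CREDENTIALS rather than the gcloud config, so it is worth checking which project that key actually belongs to. A minimal sketch, assuming a POSIX shell; the sample key file below is a made-up stand-in for a real service-account key:

```shell
# Made-up stand-in for a real service-account key file.
printf '{"project_id":"my-set-project","client_email":"svc@my-set-project.iam.gserviceaccount.com"}\n' \
  > /tmp/sample-key.json
GOOGLE_APPLICATION_CREDENTIALS=/tmp/sample-key.json

# Extract the project the key belongs to; this is the project the
# client library will authorize against, regardless of `gcloud config`.
key_project=$(sed -n 's/.*"project_id" *: *"\([^"]*\)".*/\1/p' "$GOOGLE_APPLICATION_CREDENTIALS")
echo "key project: $key_project"
```

If the project printed here is not the one with the Vision API enabled, download a key from the right project (or enable the API in the key's project).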

Launching an EC2 instance with AWS Explorer in Visual Studio error

I installed the AWS Explorer for Visual Studio and tried launching various EC2 instances. None worked.
The Error Message I get is the following:
So far I have tried using the Frankfurt region, and if I am correct, the user account for Visual Studio has full admin access. What could be the cause of this problem?
It's difficult to advise without the launch configuration you used. It could be that you have selected an instance type, e.g. t1.micro, that is not available in the Frankfurt region. You can check the available instance types using:
curl -s https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json | jq -r '.products[].attributes | select(.location == "EU (Frankfurt)" and .tenancy == "Shared") | .instanceType' | sort -u
This blog explains it well: http://rodos.haywood.org/2016/03/which-instances-are-available-in-my.html
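To see what that filter selects, here is the same jq expression run over a tiny made-up slice of the pricing JSON (assuming `jq` is installed):

```shell
# Tiny made-up stand-in for the AmazonEC2 pricing index.
sample='{"products":{"A":{"attributes":{"location":"EU (Frankfurt)","tenancy":"Shared","instanceType":"t2.micro"}},"B":{"attributes":{"location":"US East (N. Virginia)","tenancy":"Shared","instanceType":"t1.micro"}}}}'

# Keep only Shared-tenancy offerings located in Frankfurt, then print
# the distinct instance types.
frankfurt_types=$(echo "$sample" | jq -r '.products[].attributes
  | select(.location == "EU (Frankfurt)" and .tenancy == "Shared")
  | .instanceType' | sort -u)
echo "$frankfurt_types"
```

In this sample only t2.micro is offered in Frankfurt, so a t1.micro launch there would fail.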