I have gone through the WSO2 doc:
https://docs.wso2.com/display/Governance460/Changing+Storage+Location+of+Services
and changed the location of services in the registry.xml file:
<staticConfiguration>
    <versioningProperties>true</versioningProperties>
    <versioningComments>true</versioningComments>
    <versioningTags>true</versioningTags>
    <versioningRatings>true</versioningRatings>
    <!-- Location where services are added; the default is /services/ -->
    <servicePath>/trunk/services/mylocation</servicePath>
</staticConfiguration>
Whenever I try to register a service, the following exception is thrown:
ERROR {org.wso2.carbon.governance.api.common.GovernanceArtifactManager} - Failed to add artifact: artifact id: 153b122e-5b4f-4e8e-bbe0-e7934da571d0, path: /trunk/services/mylocation/sample/com/myservice. Resource does not exist at path /_system/governance/trunk/services/sample/com
org.wso2.carbon.registry.core.exceptions.ResourceNotFoundException: Resource does not exist at path /_system/governance/trunk/services/sample/com
How do I resolve this error?
Thanks in advance.
Try it at the RXT level. Go to Home > Extensions > Configure > Artifact Types > Artifact Source (select the artifact type) and change the storage path by editing the line below:
<storagePath>/trunk/services/#{namespace}/#{overview_version}/#{name}</storagePath>
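For illustration only (the artifact details here are hypothetical): the #{...} placeholders are filled from the artifact's metadata, so the final storage location is derived per artifact:

```xml
<!-- Hypothetical example: with name "myservice", namespace "com/sample"
     and version "1.0.0", this storage path -->
<storagePath>/trunk/services/mylocation/#{namespace}/#{overview_version}/#{name}</storagePath>
<!-- resolves to /trunk/services/mylocation/com/sample/1.0.0/myservice -->
```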
After a Google Cloud quota update, I can't run my terragrunt/terraform code due to a strange error. The same code worked before with another project on the same account. After I tried to recreate the project (to get a clean new project), a "Billing Quota" popup appeared, and I asked support to change the quota.
I got the following message from support:
Dear Developer,
We have approved your request for additional quota. Your new quota should take effect within one hour of receiving this message.
And now (one day later) terragrunt is not working due to this error:
Missing required GCS remote state configuration location
Here is what I actually have:
a service account for pipelines with the Project Editor and Service Networking Admin roles;
a bucket without public access (europe-west3);
the following terragrunt config:
remote_state {
  backend = "gcs"
  config = {
    project = get_env("TF_VAR_project")
    bucket  = "bucket name"
    prefix  = "${path_relative_to_include()}"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
I'm also running the following pipeline:
- terragrunt run-all init
- terragrunt run-all validate
- terragrunt run-all plan
- terragrunt run-all apply --terragrunt-non-interactive -auto-approve
and it fails on init with the error above.
The project and credentials are correct (the credentials are stored in the GOOGLE_CREDENTIALS env var as JSON without newlines or whitespace).
I also tried specifying "location" in "config", but got an error that the bucket was not found in the project.
Does anybody know how to fix this, or where the problem could be?
It worked before the quota change.
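One quick sanity check is to confirm that the state bucket still exists and that the pipeline's credentials can read it. The sketch below uses a hypothetical bucket name; the gsutil lines are commented out so you can uncomment them and run with the same credentials the pipeline uses:

```shell
# Diagnostic sketch (bucket name is hypothetical).
BUCKET="my-terragrunt-state-bucket"
echo "Checking gs://${BUCKET} ..."
# gsutil ls -b "gs://${BUCKET}"        # does the bucket still exist?
# gsutil ls "gs://${BUCKET}/" | head   # can this account list the state files?
```

If the bucket lists fine here but terragrunt still fails, the problem is more likely in how the backend config is resolved than in permissions.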
I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled "druid-google-extensions" by adding it to the druid.extensions.loadList array in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I got the following error when I tried to ingest data from GCS buckets:
Error: Cannot construct instance of
org.apache.druid.data.input.google.GoogleCloudStorageInputSource,
problem: Unable to provision, see the following errors: 1) Error in
custom provider, java.io.IOException: The Application Default
Credentials are not available. They are available if running on Google
App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise,
the environment variable GOOGLE_APPLICATION_CREDENTIALS must be
defined pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information. at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) while locating
com.google.api.client.http.HttpRequestInitializer for the 3rd
parameter of
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
at
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.storage.google.GoogleStorageDruidModule) while
locating org.apache.druid.storage.google.GoogleStorage 1 error at
[Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1,
column: 180] (through reference chain:
org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution for it. Please help me.
We want to pull data from GCP into an on-prem Druid; we don't want to run the cluster in GCP, so we need to solve this problem.
For future visitors:
If you run Druid via systemctl, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or environment changes.
GOOGLE_APPLICATION_CREDENTIALS must point to a file path, not contain the file content.
In a cluster (like Kubernetes), it is usual to mount a volume containing the file and to set the env var to point into that volume.
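A minimal sketch of such a systemd drop-in, assuming the service is named druid.service and the key file lives at /etc/druid/gcp-key.json (both hypothetical):

```shell
# Create a systemd drop-in so the credentials path is always passed to Druid
# (service name and key path are hypothetical; adjust to your installation).
sudo mkdir -p /etc/systemd/system/druid.service.d
sudo tee /etc/systemd/system/druid.service.d/gcp-credentials.conf <<'EOF'
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/etc/druid/gcp-key.json"
EOF
sudo systemctl daemon-reload
sudo systemctl restart druid
```

The drop-in survives package upgrades, unlike editing the unit file in place.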
We have a policy in place which restricts resources to EU regions.
When I try to execute a cloud build, gcloud wants to create a bucket (gs://[PROJECT_ID]_cloudbuild) to store the staging sources. This step fails because the default bucket location ('us') is used:
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
As a workaround I tried to pass an existing bucket located in a valid region (using --gcs-source-staging-dir), but I got the same error.
How can this be solved?
Here are the HTTP logs:
$ gcloud --log-http builds submit --gcs-source-staging-dir gs://my-custom-bucket/staging \
--tag gcr.io/xxxxxxxxxx/quickstart-image .
=======================
==== request start ====
uri: https://www.googleapis.com/storage/v1/b?project=xxxxxxxxxx&alt=json
method: POST
== headers start ==
accept: application/json
content-type: application/json
== headers end ==
== body start ==
{"name": "my-custom-bucket"}
== body end ==
==== request end ====
---- response start ----
-- headers start --
server: UploadServer
status: 412
-- headers end --
-- body start --
{
"error": {
"errors": [
{
"domain": "global",
"reason": "conditionNotMet",
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’",
"locationType": "header",
"location": "If-Match"
}
],
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
}
}
-- body end --
---- response end ----
----------------------
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint ‘constraints/gcp.resourceLocations’
I found the solution to this problem. After you create the project (with the resource location restriction enabled), you should create a bucket named [PROJECT_ID]_cloudbuild in your preferred location.
When you don't do this, gcloud builds submit will automatically create a bucket in the US; this is not configurable. Because of the resource restrictions, Cloud Build is not able to create this bucket in the US, and it fails with the following error:
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
When you create the bucket by hand with that name, Cloud Build will use it as the default bucket. The solution was not immediately visible, because projects that already had the cloudbuild bucket in place when the resource restrictions were applied did not show the problem.
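A sketch of that fix with gsutil; the project id and region are placeholders (pick any region your policy allows), and the gsutil line is commented out so the sketch runs anywhere:

```shell
# Pre-create the staging bucket Cloud Build expects, in an allowed EU region,
# so gcloud builds submit uses it instead of trying to create one in the US.
PROJECT_ID="my-project"                 # hypothetical project id
BUCKET="gs://${PROJECT_ID}_cloudbuild"  # the exact name Cloud Build looks for
echo "$BUCKET"
# gsutil mb -p "$PROJECT_ID" -l europe-west3 "$BUCKET"
```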
Cloud Build uses a default bucket to store logs. You can add the logsBucket field to the build config file with a specific bucket, as in this snippet:
steps:
- name: 'gcr.io/cloud-builders/go'
args: ['install', '.']
logsBucket: 'gs://mybucket'
You can find detailed information about the build configuration file here.
I had this issue investigated by Google Support. They came up with the following answer:
After investigating with the different teams involved in this issue,
it seems like the restrictions affecting Cloud Build (in this case,
preventing resource allocation in US) is intended. The only workaround
that came up from the investigation was the creation of a new project
without the restrictions in place that allows the use of Cloud Build.
They also referred to a feature request (already mentioned in Christopher P's comment; thanks for that).
I faced the same issue with location constraints; follow the steps below to fix it.
After logging in to the GCP console, select the project where you are facing the issue.
Then choose "Organization policies" from the IAM console. Make sure you have the necessary permissions for the policies to be listed.
Look for the "Google Cloud Platform - Resource Location Restriction" policy in the list. Typically it comes under the category of custom policies.
Please refer to this screenshot:
location constraint
Click on the policy, click edit, and scroll down to the custom values to add the location you want.
Using the in: prefix includes every location in a group; to allow one specific value, use the location name without the prefix.
Here, for "'us' violates constraint 'constraints/gcp.resourceLocations'", enter the value us as a custom value, add it, and save. If you need all the zones in us-east4, use the in: prefix like this:
in:us-east4-locations
Hope this works.
allowed values
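If you prefer the CLI, the effective policy can be inspected before editing it. The sketch below uses a hypothetical project id, and the gcloud line is commented out so the sketch runs anywhere (the exact command group can vary between gcloud versions):

```shell
# Inspect the effective resource-location policy for the project
# (project id is a placeholder).
PROJECT_ID="my-project"
echo "Describing constraints/gcp.resourceLocations for ${PROJECT_ID}"
# gcloud resource-manager org-policies describe \
#     constraints/gcp.resourceLocations --project="$PROJECT_ID" --effective
```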
In AWS -> Elastic Beanstalk (Dashboard) -> Configuration -> Software Configuration -> Environment Properties, I am trying to add and configure my environment properties from the ".env.default" config file of my Node.js application, which is as follows:
#.env.default
# General settings
TIMEZONE=Europe/Amsterdam
# --------
# Debug-related settings
LOG_LEVEL_CONSOLE=info
LOG_LEVEL_FILE=info
ENABLE_FILE_LOGGING=true
# Whether the local log directory (./logs/) should be preferred over /var/log/
LOG_FILE_PREFER_LOCAL=false
# Override the default logging location (/var/log/ or ./logs/)
# FORCE_LOG_LOCATION=./some-other-directory/
# /../../log/nodejs/
# --------
# Crash-related settings
MAX_CONSECUTIVE_CRASHES=5
CONSECUTIVE_CRASH_RESET_MS=5000
# --------
# Settings relating to remote API access
ENABLE_REMOTE_ACCESS=true
ENABLE_WHITELIST=true
HOST_API=true
HOST_WEB_INTERFACE=true
LISTEN_PORT=8081
JWT_SECRET=ItsASecretToEverybodyMAHBOI
# LISTEN_PORT=1903 backup
#INTERNAL_LISTEN_PORT=1939 backup
# --------
# Settings relating to internal access
INTERNAL_LISTEN_PORT=8083
# --------
# Database-related settings
DATABASE_HOST=acc-sc-3.crmhqy2lzjw4.eu-west-1.rds.amazonaws.com
DATABASE_NAME=acc_schedule_center_3
DATABASE_USER=sc_3
DATABASE_PASS=yCFKIqzLcBIBt1wYj4Qn
MAX_IDLE_TIME=28800
Environment Properties - First Side
Environment Properties - Second Side
Ignore the data listed under "Property Name" & "Property Value"; it is from the previous configuration.
The core errors I'm facing at the moment are as follows:
ERROR #1
Service:AmazonCloudFormation, Message:Stack named
'awseb-e-4e98c2gukw-stack' aborted operation. Current state:
'UPDATE_ROLLBACK_IN_PROGRESS' Reason: null
ERROR #2
Updating Auto Scaling group named:
awseb-e-4e98c2gukw-stack-AWSEBAutoScalingGroup-1GR8E4SU6QZGJ failed
Reason: Template error: DBInstance aa153clv2zourf2 doesn't exist
ERROR #3
Failed to deploy configuration.
I'm fairly new (call me a novice at coding and DevOps in general), but I would like to know if anyone has a solution for these errors.
Thanks in advance, everyone!
Kind regards,
Doga
I was able to fix my problem by adding some IAM policies to the Elastic Beanstalk user.
I only had the AdministratorAccess-AWSElasticBeanstalk policy; after I added the AWSElasticBeanstalkRoleRDS policy, it worked.
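A sketch of attaching that managed policy from the CLI. The user name is hypothetical, and you should verify the exact policy ARN in the IAM console, since managed-policy paths can differ; the aws line is commented out so the sketch runs anywhere:

```shell
# Attach the Elastic Beanstalk RDS managed policy to the deploy user
# (user name is hypothetical; confirm the ARN in the IAM console).
EB_USER="eb-deploy-user"
POLICY_ARN="arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkRoleRDS"
echo "Attaching ${POLICY_ARN} to ${EB_USER}"
# aws iam attach-user-policy --user-name "$EB_USER" --policy-arn "$POLICY_ARN"
```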
I had pretty much the same issue. For me it was the permission level of the IAM user I was using for EB: it had full EB permissions, but I needed to grant it access to other services as well.
I have a WSO2 Governance Registry setup conformant to this blog post: http://blog.shelan.org/2013/02/application-governance-with-wso2-greg.html.
When defining a new application in the WSO2 GR using the menu: Metadata > Add > Application I would like to be able to directly add the actual application artifact (war/car file).
The selected file should then be placed in the SVN location conforming to the initial state of the lifecycle to which I will bind the application. This of course implies that I would also need to be able to add the lifecycle directly when defining a new application.
The new application form would then be something like this:
Name: ExampleApplication-1.0.0
Type: .war (now redundant)
Description: My Example Application
Artifact: Selected file ExampleApplication-1.0.0.war
Lifecycle: MyDTAP-Lifecycle_v1
Does anybody know a good starting point for adding this functionality in terms of code hooks or extension points?
If I have understood you correctly, what you need to do is basically provide a file upload option in your "Application" RXT (Governance Artifact Configuration), which will upload whatever your file type is; based on that, you want to fill in the derivable information in the artifact's metadata, and also attach a selected/predefined lifecycle at artifact creation. What you are looking for is Registry Handlers [1]. You can probably achieve all the aforementioned tasks through a single handler.
[1] - http://docs.wso2.org/wiki/display/Governance453/Handlers
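A sketch of how such a handler could be registered in registry.xml (the class name and media type below are hypothetical; the handler class itself would extend the registry's Handler base class and override operations such as put):

```xml
<!-- Hypothetical registration: the handler fires when an "Application"
     artifact is stored, so it can move the uploaded war/car file and
     attach the selected lifecycle -->
<handler class="org.example.governance.ApplicationUploadHandler">
    <filter class="org.wso2.carbon.registry.core.jdbc.handlers.filters.MediaTypeMatcher">
        <property name="mediaType">application/vnd.wso2-application+xml</property>
    </filter>
</handler>
```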