Google Cloud platform terraform/terragrunt googleapi: Error 409: Requested entity already exist - google-cloud-platform

I am having a strange issue when trying to push code out to our GCP repo. It fails with the following error: "googleapi: Error 409: Requested entity already exists, alreadyExists", referring to a project that already exists. This only occurs after I either remove another project that is no longer needed or add .bck to the terragrunt.hcl files. These projects have no dependencies on each other whatsoever.
terraform {
  source = "../../../modules//project/"
}
include {
  path = find_in_parent_folders("org.hcl")
}
dependency "folder" {
  config_path = "../"
  # Configure mock outputs for the terraform commands that are returned when there are no
  # outputs available (e.g. the module hasn't been applied yet).
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
  mock_outputs = {
    folder_id = "folder-not-created-yet"
  }
}
inputs = {
  project_name       = "<pimsstest-project>"
  folder_id          = dependency.folder.outputs.folder_created # Test folder id
  is_service_project = true
}
The code push fails when the structure in VS Code is like this:
But it succeeds when it is like this:
Some background to add. Pimsstest used to exist in a production folder under the org, and I moved it to test via VS Code with a simple cut and paste and a re-push of the code. I then removed the project from the console, as it still existed in production. I cannot work out why the removal of another project flags up this "already exists" error on pimsstest. It doesn't make any sense to me.

Across GCP, a project ID can exist only once. Upon removal, it might not be instantly available again (the project will sit in the "scheduled for deletion" state for a while, and you should receive an email with the details of the scheduled operation). What the error message is actually trying to tell you may be:
Error 409: Requested entity already STILL exist.
In theory (even if it's unlikely when the name is unique enough), any other customer could snatch the ID in the meantime, in which case the error message could be taken literally.
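You can check which of these cases you are in from the CLI. A minimal sketch, assuming the gcloud SDK is installed and authenticated; "pimsstest-project" below is a placeholder ID, not the real one:

```shell
# Check whether the project still exists and what state it is in.
# ACTIVE            -> the project still exists
# DELETE_REQUESTED  -> deletion is scheduled; the ID is not reusable yet
gcloud projects describe pimsstest-project --format="value(lifecycleState)"

# List every project of yours that is pending deletion:
gcloud projects list --filter="lifecycleState:DELETE_REQUESTED"
```

If the project shows up as DELETE_REQUESTED, Terraform's attempt to recreate it under the same ID will keep returning the 409 until the deletion completes or the project is restored.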

Related

WSO2 APIM custom error messages removed after restart

I added some custom error messages to APIM according to the documentation https://apim.docs.wso2.com/en/4.0.0/troubleshooting/error-handling/ - I created a custom file in
<API-M_HOME>/repository/deployment/server/synapse-configs/default/sequences and added references to that file in some of the default files in that directory (so that it is called to transform the error message).
Everything seemed to be working just fine until WSO2 was restarted - after that, the changes made to the default files were still present, but the custom file had been removed, so the custom error message handling no longer worked.
I resolved this by adding the immutable attribute (chattr +i) to the file, but I wonder: is there another, more elegant way to prevent the file from being deleted on every restart?
There are 'template' files placed in <API-M_HOME>/repository/resources/apim-synapse-config. Maybe those files are overriding the files in the ../synapse-configs/default/ location.
The second thing that comes to mind is a specific High Availability scenario: where artifacts are shared files in the system, the content synchronization mechanism can override local changes.
At startup, the gateway removes these files. You can add the following configuration to deployment.toml and place the file in the sequences directory.
Sample Config:
[apim.sync_runtime_artifacts.gateway.skip_list]
apis = ["api1.xml","api2.xml"]
endpoints = ["endpoint1.xml"]
sequences = ["post_with_nobody.xml"]
local-entries = ["file.xml"]
For your case:
[apim.sync_runtime_artifacts.gateway.skip_list]
sequences = ["name_of_the_file.xml"]
Refer - https://apim.docs.wso2.com/en/latest/install-and-setup/setup/distributed-deployment/deploying-wso2-api-m-in-a-distributed-setup/#configure-the-gateway-nodes

How can I fix: Terraform error refreshing state: state snapshot was created by Terraform v0.14.5, which is newer than current v0.13.0

I am trying to upgrade my Terraform version from 0.12 to 0.13, but I had previously run init and plan with a globally installed Terraform 0.14.5.
I'm struggling to understand how this affects the state snapshot and how I can remove this error. The remote state hasn't changed, so where is it getting this from? I have removed any .terraform directory in the project.
Terraform holds its state either in a remote backend or in a local one.
If you have no configuration block like the following in your configuration files (the backend type, and so the name in "...", varies depending on the backend used):
terraform {
  backend "..." {
  }
}
Then it would be safe to assume you have a local JSON state file named terraform.tfstate and, since your project existed before the upgrade, a file terraform.tfstate.backup.
If you peek into those files, you will see the version of Terraform used to create the state near the beginning of the file.
For example:
{
  "version": 4,
  "terraform_version": "0.14.5"
}
From there, and with all the caution in the world, having made sure you indeed didn't change anything in the remote state, you have some options:
- if your terraform.tfstate.backup still has "terraform_version": "0.13.0", you can simply roll back by removing terraform.tfstate and renaming terraform.tfstate.backup to terraform.tfstate
- you can try to "hack" the actual terraform.tfstate and change the version there by adapting the line "terraform_version": "0.14.5"
- as advised in the link below, you can create a state version using the API, overriding the state and manually specifying the expected terraform_version
My advice would still be to diff terraform.tfstate against terraform.tfstate.backup to see what may have changed, or to use a versioning tool if your terraform.tfstate is under version control.
Useful read: https://support.hashicorp.com/hc/en-us/articles/360001147287-Downgrading-Terraform-Version-in-the-State
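The second option (editing the version in place) can be sketched as follows. This runs against a throwaway stand-in for terraform.tfstate, not a real project; the file contents and version numbers are examples only, and sed -i here is the GNU form:

```shell
# Work in a scratch directory with a minimal stand-in for terraform.tfstate.
cd "$(mktemp -d)"
cat > terraform.tfstate <<'EOF'
{
  "version": 4,
  "terraform_version": "0.14.5",
  "serial": 1,
  "resources": []
}
EOF

# 1. Check which Terraform version wrote the state.
grep '"terraform_version"' terraform.tfstate

# 2. Keep a copy before touching anything.
cp terraform.tfstate terraform.tfstate.manual-backup

# 3. Rewrite the version in place (the "hack" option; use with caution).
sed -i 's/"terraform_version": "0.14.5"/"terraform_version": "0.13.0"/' terraform.tfstate

# The state file now claims it was written by 0.13.0.
grep '"terraform_version"' terraform.tfstate
```

On a real project, diff the edited file against the backup before running any further Terraform commands.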

Cannot build Corda (release/os/4.6) on Ubuntu 20.04 due to failing tests (CryptoUtilsTest & NodeH2SecurityTests)

I am trying to build Corda on Ubuntu 20.04. I have the latest sources from the git repo (release/os/4.6) and I run ./gradlew build in the main folder. However, the build fails during two tests (see the detailed description below). Is there something that I'm missing? Are there some special flags that I should use for building Corda?
First, the test test default SecureRandom uses platformSecureRandom fails at the last assert, i.e.,
// in file net/corda/core/crypto/CryptoUtilsTest.kt
fun `test default SecureRandom uses platformSecureRandom`() {
    // [...]
    // Remove Corda Provider again and add it as the first Provider entry.
    Security.removeProvider(CordaSecurityProvider.PROVIDER_NAME)
    Security.insertProviderAt(CordaSecurityProvider(), 1) // This is base-1.
    val secureRandomRegisteredFirstCordaProvider = SecureRandom()
    assertEquals(PlatformSecureRandomService.algorithm, secureRandomRegisteredFirstCordaProvider.algorithm)
}
The test fails with Expected <CordaPRNG>, actual <SHA1PRNG>.
For some reason, the test succeeds if I call the getServices() method before inserting the provider, i.e.,
val provider = CordaSecurityProvider()
provider.getServices()
Security.insertProviderAt(provider, 1) // This is base-1.
I also tried getting the service SecureRandom.CordaPRNG directly from the provider, and it works, i.e.,
println(provider.getService("SecureRandom", "CordaPRNG"))
prints out Corda: SecureRandom.CordaPRNG -> javaClass
Second, the test h2 server on the host IP requires non-default database password fails because it expects a CouldNotCreateDataSourceException but gets a NullPointerException instead, i.e.,
// in file net/corda/node/internal/NodeH2SecurityTests.kt
fun `h2 server on the host IP requires non-default database password`() {
    // [...]
    address = NetworkHostAndPort(InetAddress.getLocalHost().hostAddress, 1080)
    val node = MockNode()
    val exception = assertFailsWith(CouldNotCreateDataSourceException::class) {
        node.startDb()
    }
    // [...]
}
The problem is that the address is 127.0.1.1:1080, which means that net/corda/node/internal/Node.kt::startDatabase() does not throw CouldNotCreateDataSourceException since the condition to enter the branch
if (!InetAddress.getByName(effectiveH2Settings.address.host).isLoopbackAddress
        && configuration.dataSourceProperties.getProperty("dataSource.password").isBlank()) {
    throw CouldNotCreateDataSourceException()
}
is not satisfied. Instead, it calls toString() on the parent of the path given by the DB name; the parent is null, so it throws a NullPointerException, i.e.,
val databaseName = databaseUrl.removePrefix(h2Prefix).substringBefore(';')
// databaseName = my_file
val baseDir = Paths.get(databaseName).parent.toString()
Unfortunately, there are a LOT of reasons that building Corda from source might not work.
Here are my recommendations:
- it could be a Java issue; there's a docs page on the specific version of Java 8 that you need to use (latest Java support is on the roadmap for Corda 5 🔥). Here's the docs page on that: https://docs.corda.net/docs/corda-os/4.5/getting-set-up.html
- like Alessandro said, you'll want to be aware that 4.6 isn't generally available yet, so there may well be bugs and problems in the code until the release. Also take another look at the docs page on building Corda (here: https://docs.corda.net/docs/corda-os/4.5/building-corda.html#debianubuntu-linux); it mentions using Ubuntu 18.04 rather than the latest release, as C-library differences on newer distributions can get in the way.
Good luck to you.

SmtpClient.send - Could not find a part of the path

I'm trying to write an email to my local folder. I successfully wrote an email to my documents folder using this code:
using (var client = new SmtpClient())
{
    client.UseDefaultCredentials = true;
    client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
    client.PickupDirectoryLocation = tempDocsPath;
    client.Send(message); // Writes to the PickupDirectoryLocation
}
However, when I ported this same code to another project, it gives me this error:
System.Net.Mail.SmtpException : Failure sending mail. ---> System.IO.DirectoryNotFoundException : Could not find a part of the path 'C:\Users\josh.bowdish\source\repos\GenerateEmail\GenerateEmail\bin\Debug\net461\tempFiles\AAMkAGUyODNhN2JkLThlZWQtNDE4MS1hODM1LWU0ZDY4Y2NhYmMxOQBGAAAAAABKB1jlHZSIQZSWN7AYZH2SBwDZdOTdKcayQ5NMwcwkNT7UAAAAAAEMAADZdOTdKcayQ5NMwcwkNT7UAACn\0a5b24a5-d625-4ecd-9990-af5654679820.eml'.
I've verified that the directory it's trying to write to exists, even rewrote it to look like this:
private static string WriteEmail(MailMessage message, string messageDirectory)
{
    if (Directory.Exists(messageDirectory))
    {
        using (var client = new SmtpClient())
        {
            client.UseDefaultCredentials = true;
            client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
            client.PickupDirectoryLocation = messageDirectory;
            client.Send(message); // Writes to the PickupDirectoryLocation
        }
        ...
    }
    //stuff that returns the full email path
}
It breaks on the client.Send() line with the above error. As far as I can tell, the code paths are identical. I've tried writing to the same folder that the other project is working with, to no avail. The only thing I can think of is that it's trying to write the email file before the directory exists, but the other project writes it just fine.
Can someone tell me what is generating this error?
Thanks,
~Josh
This could be a permissions problem. Ensure that the account your application runs under has permission to write to this directory. Your Directory.Exists check can pass, since it only checks that the directory is there, while the actual write still fails.
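A quick way to rule this in or out from a terminal: try to actually create a file in the pickup directory. This is a sketch; "./tempFiles" is a placeholder for your PickupDirectoryLocation, and on Windows you would run it from a POSIX-style shell such as Git Bash (or inspect the ACLs with icacls instead):

```shell
# Try to create and then remove a file in the pickup directory.
DIR="./tempFiles"   # placeholder for your PickupDirectoryLocation
mkdir -p "$DIR"
if touch "$DIR/.write-test" 2>/dev/null; then
    rm "$DIR/.write-test"
    echo "writable"
else
    echo "not writable"
fi
```

If this reports "not writable" under the same account the application runs as, the permissions explanation fits; if it is writable, look instead at whether the full path in the exception (including any subfolders SmtpClient appends) really exists at send time.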

Nuclio - "/bin" is not a valid file

When I try to run Nuclio, I receive the following error:
nuclio\plugin\fileSystem\reader\FileReaderException "/bin" is not a valid file.
This is a new installation with a custom application. I moved the application into a folder named "private".
What should I do to fix the problem?
This is likely due to the application not receiving a correct config path.
In your init.hh file, add an args key and provide the application constructor parameters as shown in the example below:
<?hh //partial
return HH\Map {
    'sampleApp\\SampleApp' => HH\Map {
        'autoInit' => true,
        'args' => HH\Vector {
            '/',                        // URI binding
            __DIR__.'/sampleApp/config' // Config dir
        }
    }
};
Without this, the Application plugin will try to search for the config but will eventually give up, producing the resulting error.
We'll make the error more obvious in a future release.