Azure Cosmos DB Stuck in 'Creating' status

My Azure CosmosDB account has a status of 'Creating' and I am unable to delete it. I deployed the account from Visual Studio to my Azure subscription as I have done successfully in the past. However, this time, it got stuck in the 'Creating' status and doesn't seem to be making any progress.
I have tried to delete the account but I get the following error message:
There is already an operation in progress which requires exclusive lock on this service mv-iot-messages. Please retry the operation after sometime.
ActivityId: 6b46b569-661e-4be5-8ccc-90d158a09047, Microsoft.Azure.Documents.Common/2.5.1 (Code: PreconditionFailed)
I have attempted to delete the account from both the Azure Portal as well as from the Microsoft Azure Command prompt and I get the same message.
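For reference, the CLI attempt was along these lines; a sketch using the Azure CLI, with the resource group as a placeholder (the account name is taken from the error message above):
# Attempted delete via the Azure CLI; <resource-group> is a placeholder
az cosmosdb delete --name mv-iot-messages --resource-group <resource-group>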
I expect the Cosmos DB account either to finish deploying, after which I can delete it, or to find a way to override the 'exclusive lock'.

Related

PERMISSION_DENIED for BigQuery Storage API on Apache Beam 2.39.0 and DataFlow runner

I have the following error for one of my DataFlow Jobs:
2022-06-15T16:12:27.365182607Z Error message from worker: java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: BigQuery Storage API has not been used in project 770406736630 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/bigquerystorage.googleapis.com/overview?project=770406736630 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The same code works fine with Apache Beam 2.38.0. I tested multiple times and this is not a temporary issue. The project number mentioned in the error (770406736630) is not mine.
Any idea why I get this error?
I had the same issue. I'm using Spring Cloud GCP and hadn't set the spring.cloud.gcp.project-id property, which I'm guessing makes the SDK or API use some default value.
I don't know how you've set up your environment, because you haven't specified, but look into how you can explicitly set the project id. You can get it from the project selection dialog in the GCP Console.
I just ran into this, and simply needed to re-authenticate with the gcloud CLI by running gcloud auth application-default login.
The error happens with the latest Apache Beam SDK (2.41.0) when BigQueryIO.Write.Method.STORAGE_WRITE_API is used and the destination does not specify the project name, for example dataset.table instead of project-id:dataset.table.
This is the solution that worked for me:
BigQueryIO.writeTableRows()
    .to("project-id:dataset.table")
    .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
For some reason the Apache Beam implementation of the BigQuery Storage Write API does not handle this situation, even though it works fine with the FILE_LOADS method.
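For comparison, a minimal sketch of the same write using FILE_LOADS, which does tolerate an unqualified table spec (the table name here is a placeholder):
// Hypothetical example: unqualified "dataset.table" works with FILE_LOADS
BigQueryIO.writeTableRows()
    .to("dataset.table")
    .withMethod(BigQueryIO.Write.Method.FILE_LOADS)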
You may also receive a slightly different error with the latest Beam SDK.
Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Permission denied: Consumer 'project:null' has been suspended.
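The 'project:null' variant again points to a missing project id. One way to rule that out is to set the project explicitly on the pipeline options; a minimal sketch, assuming the Beam GCP extensions are on the classpath and with the project id as a placeholder:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.gcp.options.GcpOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Parse command-line flags, then force the GCP project explicitly
GcpOptions options = PipelineOptionsFactory.fromArgs(args).as(GcpOptions.class);
options.setProject("my-project-id"); // placeholder: your real project id
Pipeline pipeline = Pipeline.create(options);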

GCP Data Fusion 'no discoverable found' error

I'm trying to use GCP Data Fusion Basic Edition with the Private IP option, but when I try to create a pipeline, every action gives me this error:
No discoverable found for request POST /v3/namespaces/system/apps/pipeline/services/studio/methods/v1/contexts/default/validations/stage HTTP/1.1
Any suggestions on how to solve this issue?
Thanks
This error indicates that the Pipeline Studio service is down. Check the status of Pipeline Studio in System Admin and look at the logs as described here.
You can restart the Pipeline Studio service by going to System Admin > Configuration > Make HTTP Call.
Change the method to POST and set the path to namespaces/system/apps/pipeline/services/studio/start.
You can validate your pipeline once the Pipeline Studio status becomes green.
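If the UI route is unavailable, the same call can be issued against the instance's CDAP REST API directly; a sketch, assuming a valid access token and with the instance's API endpoint as a placeholder:
# <instance-api-endpoint> is a placeholder for your Data Fusion instance's API endpoint
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "<instance-api-endpoint>/v3/namespaces/system/apps/pipeline/services/studio/start"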

Stackdriver Error Email Notifications not Sending

I want to get notifications by email for errors on my google cloud service.
It seemed pretty easy to set up: I just hit "Turn on notifications" in Error Reporting in Stackdriver for all services (I only have 1 service).
I created some errors for testing, but didn't receive any emails.
I went into alert policies and profiles and added email notification as a channel. Still no notifications via email.
What am I doing wrong?
I think the best way to check if the notification is actually being sent is to post a test error message manually to the error reporting API.
First, verify that error notifications are available for the project, and then run the following command in Cloud Shell:
gcloud beta error-reporting events report --verbosity=debug --service Manual --service-version test1 \
--message "java.lang.TestError: msg
at com.example.TestClass.test(TestClass.java:51)
at com.example.AnotherClass(AnotherClass.java:25)"
You should receive an email from StackdriverNotifications-noreply@google.com
If you receive the notification for the test, your other errors might be misconfigured. If you don't receive it, then you might need to verify that the noreply address is not being blocked by your provider/filters.
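If you suspect the email channel itself was never created correctly, you can also create one from the CLI and attach it to your alerting policy; a sketch, with the display name and address as placeholders:
# Create an email notification channel; the address below is a placeholder
gcloud beta monitoring channels create \
  --display-name="Error emails" \
  --type=email \
  --channel-labels=email_address=you@example.com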

Authentication with Cognito - where to find logs

We have 2 React Native apps that use AWS Cognito for authentication, via the library react-native-aws-cognito-js. The apps were working fine until the last 2 days; they are now experiencing intermittent "Internal Server Error" responses.
How can I find more information about this error? Any tool can help us pinpoint the cause?
Update
From CloudTrail, each API call has a "CreateNetworkInterface" event. Many such calls have the error code "Client.NetworkInterfaceLimitExceeded". What is the cause of this, and the solution?
According to this AWS doc (in Chinese), CloudWatch will not write logs when the error is due to insufficient IPs/ENIs. That explains the increase in error count with no corresponding logs in CloudWatch.
Update 2
We found a scheduled Lambda job which may have exhausted the available IP addresses. We stopped the batch job, but we still can't have many users log in to the server, due to the "Client.NetworkInterfaceLimitExceeded" error. I realized that there are many "CreateNetworkInterface" events and few "DeleteNetworkInterface" events. How can I clean up / reset all network interfaces in the VPC?
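A hedged sketch of one way to do that cleanup with the AWS CLI (this only removes ENIs that are already detached; interfaces still held by Lambda are released by the service itself, which can take time):
# List network interfaces that are detached (status "available")
aws ec2 describe-network-interfaces \
  --filters Name=status,Values=available \
  --query 'NetworkInterfaces[].NetworkInterfaceId' \
  --output text

# Delete each detached interface; in-use ENIs will be rejected
for eni in $(aws ec2 describe-network-interfaces \
    --filters Name=status,Values=available \
    --query 'NetworkInterfaces[].NetworkInterfaceId' --output text); do
  aws ec2 delete-network-interface --network-interface-id "$eni"
done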
Short answer: CloudTrail.
Long answer with a suggestion
Assuming your application code is fine, the most likely cause of your 500 errors is Cognito's default limits (e.g., number of calls per user): https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html.
AWS suggests using CloudTrail for logging API calls.
However, I would suggest proving the limits are the problem first: add some logging around the API calls yourself, and in development call your app/API with a high number of requests; most likely you will see the 500 error once you hit the limits.
You could do the following in the terminal:
for i in `seq 1 1000`; do curl --cookie SecureCookie=TokenValueFromAWS http://localhost:desirablePort/SecuredPath; done

How do I test the Azure Webjobs SDK projects locally?

I want to be able to test an Azure WebJobs SDK project locally, before I actually publish it to Azure.
If I make a brand new Azure Web Jobs Project, I get some code that looks like this:
Program.cs:
// To learn more about Microsoft Azure WebJobs SDK, please see http://go.microsoft.com/fwlink/?LinkID=320976
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        var host = new JobHost();
        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
Functions.cs:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // on an Azure Queue called queue.
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
I would like to test whether or not the QueueTrigger function is working properly, but I can't even get that far, because at host.RunAndBlock(); I get the following exception:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Microsoft Azure WebJobs SDK Dashboard
connection string is missing or empty. The Microsoft Azure Storage
account connection string can be set in the following ways:
1. Set the connection string named 'AzureWebJobsDashboard' in the connectionStrings section of the .config file in the following format, or
2. Set the environment variable named 'AzureWebJobsDashboard', or
3. Set the corresponding property of JobHostConfiguration.
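For reference, that third option looks like this in code; a minimal sketch for the 1.x WebJobs SDK, with both connection strings as placeholders:
var config = new JobHostConfiguration
{
    // Placeholders: substitute real Azure Storage connection strings
    DashboardConnectionString = "<dashboard-connection-string>",
    StorageConnectionString = "<storage-connection-string>"
};
var host = new JobHost(config);
host.RunAndBlock();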
I ran the storage emulator and set the AzureWebJobsDashboard connection string like so:
<add name="AzureWebJobsDashboard" connectionString="UseDevelopmentStorage=true" />
but when I did that, I got a different error:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Failed to validate Microsoft Azure WebJobs SDK
Dashboard account. The Microsoft Azure Storage Emulator is not
supported, please use a Microsoft Azure Storage account hosted in
Microsoft Azure.
Is there any way to test my use of the WebJobs SDK locally?
WebJobs 2.0 now works using development storage (I'm using v2.0.0-beta2).
Note that latency in general and Blob triggers in particular are currently far better than you can get in production. Design with care.
If you want to test the WebJobs SDK locally, you need to set up a storage account in Azure. You can't test it against the Azure Emulator. That's what that error is telling you.
Failed to validate Microsoft Azure WebJobs SDK Dashboard account. The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account hosted in Microsoft Azure.
So to answer your question: you can create a storage account in Azure using the portal, and then set up your connection string in the app.config of your Console Application. Then just drop a message onto the queue and run the Console Application locally, and it will pick it up (assuming you're trying to interact with the queue, obviously).
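For reference, the app.config entries would then point at a real storage account instead of the emulator; a sketch, with the account name and key as placeholders:
<!-- placeholders: use your storage account's name and key -->
<add name="AzureWebJobsDashboard"
     connectionString="DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>" />
<add name="AzureWebJobsStorage"
     connectionString="DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>" />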
Make sure that you replace "queue" in [QueueTrigger("queue")] with the name of the queue you want to poll.
Hope this helps