I have created a console app, managed to upload it to the cloud, and scheduled it to run every 15 minutes. The console app runs successfully the first time but then fails, stating an error in the connection string. Could someone shed light on this please? It would be greatly appreciated.
Thanks
The error message is shown in the attached screenshot.
Make sure that you are setting a connection string named AzureJobsRuntime in your Windows Azure Website configuration with a value similar to DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY pointing to the Windows Azure Storage account where the Windows Azure WebJobs Runtime logs are stored.
Please visit the article about configuring connection strings for more information on how you can configure connection strings in your Windows Azure Website.
To clarify a couple of possible gotchas (adding to the accepted answer):
Set these values by going to
App Services -> Your Web App -> Settings / "All Settings" -> Application Settings -> (In page under header) Connection strings
There you'll find Name, Value, and a Type drop down.
Name: Do NOT put your storage account name here! Rather, this is where you put AzureWebJobsDashboard for one connection string and AzureWebJobsStorage for the other. The value for each should look like:
DefaultEndpointsProtocol=https;AccountName=<mysupercoolblobstorageaccountname>;AccountKey=<blahblah==>
-- Old Portal --
I've had problems with this before that were fixed in the old portal, so for completeness:
Old Portal: Your website -> Configure tab -> under 'connection strings', enter two new entries: set the dropdown Type to CUSTOM, and for Name do NOT enter the name of your storage account; instead use 'AzureWebJobsDashboard' for one entry and 'AzureWebJobsStorage' for the other.
You need to set AzureJobsRuntime as a connection string (for an Azure storage account); you can do that in the Azure portal under: Websites --> Your Website --> CONFIGURE tab --> connection strings.
The WebJob is not able to find the connection string values in the appsettings.json file. There could be two scenarios:
If you are using the storage emulator, try adding this to your appsettings.json file:
{
  "ConnectionStrings": {
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
If you are connecting directly to your Azure storage account:
{
  "ConnectionStrings": {
    "AzureWebJobsDashboard": "url",
    "AzureWebJobsStorage": "url"
  }
}
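If the values above are present but the host still can't see them, a quick sanity check is to load the file yourself and print what it contains. A minimal sketch follows, assuming the Microsoft.Extensions.Configuration.Json package and that appsettings.json is set to copy to the output directory; the class name ConfigCheck is just illustrative.
using System;
using System.IO;
using Microsoft.Extensions.Configuration;

class ConfigCheck
{
    static void Main()
    {
        var config = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false)
            .Build();

        // Prints empty lines if the keys are missing or the file was not copied to the output folder
        Console.WriteLine(config.GetConnectionString("AzureWebJobsStorage"));
        Console.WriteLine(config.GetConnectionString("AzureWebJobsDashboard"));
    }
}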
Note: This is not new, but I have some new insights on it.
For about three weeks now I have regularly tried to deploy the development schema of my CloudKit container to production, using the CloudKit Dashboard:
It spins for exactly a minute and then tells me "There was a problem loading the environment's status".
This is not new; many other questions describe this as well:
Error CloudKit Dashboard - There was a problem loading the environment's status
Does iCloud need to be in the Production environment in order to use in Production?
iCloud dashboard: Cannot deploy CloudKit schema to Production
Apple support told me to
look at https://developer.apple.com/forums/thread/656723 (try again after a day with stable network)
use Safari and reset browser settings to clear cache and cookies
"You may also try creating a new CloudKit container, rebuilding your schema, and then try again." => obviously doesn't work, because users have data on production
TL;DR:
Kill the timeout by running this in the console:
var id = window.setTimeout(function() {}, 0);
while (id--) {
window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(the response is undefined — that's okay)
How I got there
So I started to look at the requests the site makes to the backend when I click "deploy". Chrome shows that the request to
https://p39-ckdatabasews.icloud.apple.com/r/v3/user/<container-name>/production/public/admin/deployment/status?team_id=<team-id>
is cancelled after 1.0 min.
Insight 1
The problem is with the production schema. I had used the Reset Development Environment before to make sure I hadn't messed that up myself, but this would have spared me that.
I used the Copy as cURL command (in Chrome, because it also copies the auth cookies, which Safari does not) and ran it in Terminal.
Interestingly, that does respond, after 1 min 37 s. That's also what the X-Apple-Edge-Response-Time: 97244 header says (in milliseconds).
If you know what to look for, the console will also tell you that the request timed out:
Insight 2
The server takes too long to respond (> 1min) and the client script times out (at 1 min)
Note: You can also get a response by right-clicking the request in Chrome and choosing "Replay XHR".
Solution
I tried to understand the JavaScript that sends the XHR request and modify the timeout, but I failed. However, you can apparently clear all timeouts that exist with
var id = window.setTimeout(function() {}, 0);
while (id--) {
window.clearTimeout(id); // will do nothing if no timeout with id is present
}
(from https://stackoverflow.com/a/8860203)
Running that while waiting for the response actually worked for me!
I have created a CDS connection in the PowerApps platform under a particular environment, which I can see listed there as follows:
But when I go to the PowerApps admin portal and try to use this connection to create a ConnectionSet, only a single connection appears there, which is for Dynamics 365 for Operations. I don't see any other connection to choose from to proceed with the integration task. My intention is to integrate the CDS data from the associated Dynamics 365 for Talent into Dynamics 365 for Operations.
Screenshot of the ConnectionSet creation step with the only available connection:
Please let me know what I am missing and why the other connection is not appearing in the list.
To tell if a connection can be used by Data Integration, please try the instructions below.
Log on to https://admin.powerapps.com/environments using Chrome
Enable network trace (F12) for the browser, and switch to [Network] tab
Type [integratorApp] in the filter editor box
Power Apps Admin Portal
Click [Data integration] to switch to Data Integration page
In the network trace tab, look for [targetTypes] and in the [Preview] tab, observe a list of supported types and their corresponding apiId’s.
For [Common data service], the type is [CDS2] and apiId [/providers/Microsoft.PowerApps/apis/shared_commondataservice]
Supported Target types
In the [Network] tab, type [powerApps] in the filter edit box
Click on [+New connection set] to open [Connection Set] dialog
In the [Network] tab, look for [connections?api-version…]. There might be more than one call if you have multiple PowerApps environments.
Switch to [Preview] in the response, observe the connections whose apiIds match the supported ones, and check if the connection in question is listed.
Connections
I've just had a bit of fun trying to connect to a new VM I'd created. I found loads of posts from people with the same problem; this answer details the points I've found.
(1) For me it worked with
<VMName>\Username
Password
e.g.
Windows8VM\MyUserName
SomePassword#1
(2) Some people have just needed to use a leading '\', i.e.
\Username
Password
Your credentials did not work Azure VM
(3) You can now reset the username/password from the app portal. There are powershell scripts which will also allow you to do this but that shouldn't be necessary anymore.
(4) You can also try redeploying the VM; you can do this from the app portal
(5) This blog says that "Password cannot contain the username or part of username", but that must be out of date as I tried that once I got it working and it worked fine
https://blogs.msdn.microsoft.com/narahari/2011/08/29/your-credentials-did-not-work-error-when-connecting-to-windows-azure-vms/
(6) You may find links such as the one below which mention Get-AzureVM; that seems to be for classic VMs. There are equivalents for Resource Manager VMs, such as Get-AzureRmVM
https://blogs.msdn.microsoft.com/mast/2014/03/06/enable-rdp-or-reset-password-with-the-vm-agent/
For complete novices to PowerShell, if you do want to go down that road, here are the basics you may need. In the end I don't believe I needed this, just point (1)
Uninstall-Module AzureRM
Install-Module AzureRM -AllowClobber
Import-Module AzureRM
Login-AzureRmAccount        # opens a window which takes you through the usual logon process
Add-AzureAccount            # not sure why you need both, but I couldn't log on without this
Select-AzureSubscription -SubscriptionId <the guid for your subscription>
Set-AzureRmVMAccessExtension -ResourceGroupName "<your RG name>" -VMName "Windows8VM" -Name "myVMAccess" -Location "northeurope" -UserName <username> -Password <password>
(7) You can connect to a VM in a scale set because by default the Load Balancer will have NAT rules mapping ports from 50000 onwards, i.e. just remote desktop to the IP address:port. You can also do it from a VM that isn't in the scale set: go to the scale set's overview, click on the "virtual network/subnet", and that will give you the internal IP address. Then remote desktop to that internal IP from the other VM.
Ran into similar issues. It seems to need a domain by default. Here is what worked for me:
localhost\username
Another option can be vmname\username
Some more guides to help:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal#connect-to-virtual-machine
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/connect-logon
In April 2022 "Password cannot contain the username or part of username" was the issue.
During the creation of the VM in Azure everything was alright, but I wasn't able to connect via RDP.
Same in Nov 2022: you are allowed to create a password that contains the user name, but during login it will display the credentials error. Removing the user name from the password fixed it.
I want to be able to test an Azure WebJobs SDK project locally, before I actually publish it to Azure.
If I make a brand new Azure Web Jobs Project, I get some code that looks like this:
Program.cs:
// To learn more about Microsoft Azure WebJobs SDK, please see http://go.microsoft.com/fwlink/?LinkID=320976
class Program
{
// Please set the following connection strings in app.config for this WebJob to run:
// AzureWebJobsDashboard and AzureWebJobsStorage
static void Main()
{
var host = new JobHost();
// The following code ensures that the WebJob will be running continuously
host.RunAndBlock();
}
}
Functions.cs:
public class Functions
{
// This function will get triggered/executed when a new message is written
// on an Azure Queue called queue.
public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
{
log.WriteLine(message);
}
}
I would like to get around to testing whether or not the QueueTrigger function is working properly, but I can't even get that far, because on host.RunAndBlock(); I get the following exception:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Microsoft Azure WebJobs SDK Dashboard
connection string is missing or empty. The Microsoft Azure Storage
account connection string can be set in the following ways:
Set the connection string named 'AzureWebJobsDashboard' in the connectionStrings section of the .config file in the following format, or
Set the environment variable named 'AzureWebJobsDashboard', or
Set corresponding property of JobHostConfiguration.
I ran the storage emulator and set the AzureWebJobsDashboard connection string like so:
<add name="AzureWebJobsDashboard" connectionString="UseDevelopmentStorage=true" />
but when I did that, I got a different error:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Failed to validate Microsoft Azure WebJobs SDK
Dashboard account. The Microsoft Azure Storage Emulator is not
supported, please use a Microsoft Azure Storage account hosted in
Microsoft Azure.
Is there any way to test my use of the WebJobs SDK locally?
WebJobs 2.0 now works using development storage (I'm using v2.0.0-beta2).
Note that latency in general, and blob triggers in particular, are currently far better than what you can get in production. Design with care.
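A minimal sketch of what that can look like with the JobHost API (assuming the Microsoft.Azure.WebJobs package); the connection strings are set in code via JobHostConfiguration here instead of app.config, pointing both at the local storage emulator:
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // Point both the dashboard and the storage connection at the local emulator
        var config = new JobHostConfiguration
        {
            DashboardConnectionString = "UseDevelopmentStorage=true",
            StorageConnectionString = "UseDevelopmentStorage=true"
        };

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}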
If you want to test the WebJobs SDK locally, you need to set up a storage account in Azure. You can't test it against the Azure Emulator. That's what that error is telling you.
Failed to validate Microsoft Azure WebJobs SDK Dashboard account. The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account hosted in Microsoft Azure.
So to answer your question, you can create a storage account in Azure using the portal, and then set up your connection string in the app.config of your Console Application. Then just drop a message to the queue and run the Console Application locally and it will pick it up (assuming you're trying to interact with the queue obviously).
Make sure that you replace the [QueueTrigger("queue")] "queue" with the name of the queue you want to poll.
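For example, here is a small sketch (assuming the WindowsAzure.Storage package) that seeds the queue with a test message for the locally running WebJob to pick up; the connection string placeholders and the queue name are yours to fill in, and the class name QueueSeeder is just illustrative.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSeeder
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<NAME>;AccountKey=<KEY>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("queue");

        queue.CreateIfNotExists();                                   // make sure the queue exists
        queue.AddMessage(new CloudQueueMessage("Hello from local test"));
    }
}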
Hope this helps
I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) with our on-premise LDAP server running on Windows Server 2008 R2. However, I am unable to do so. I am getting the following error (Could not authorize you from Ldapmain because "Invalid credentials"):
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
label: 'LDAP'
host: 'XX.YYY.Z.XX'
port: 389
uid: 'sAMAccountName'
method: 'plain' # "tls" or "ssl" or "plain"
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
password: 'pwd1234'
active_directory: true
allow_username_or_email_login: true
base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
user_filter: ''
EOS
There are two users: gitlab (newly created AD user) and john.doe (old AD user)
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run the gitlab-rake gitlab:ldap:check command, it displays info about that particular user only and not all users.
Earlier, gitlab-rake gitlab:ldap:check was displaying the first 100 results from AD when my credentials (john.doe) were configured in the gitlab.rb file. Since these were my personal credentials, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in gitlab.rb and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be due to some permission issue for the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, now when I run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! I think GitLab is somehow "forgetting" previous details.
Any help will really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using. Upgrading to 8.0.5 fixed the issue.
Also, values of bind_dn and base that worked for me are:
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'