Microsoft Redis Session State Provider on AWS issue - amazon-web-services

I'm building a web farm environment for an ecommerce package. The package runs on .NET + AWS.
The package can store session state in a Redis cache. The problem is that when I use Amazon's Redis (ElastiCache, v 2.82 I think) I get connectivity exceptions when trying to use the cache. There are no such issues on Azure.
The only difference I see between the configs is that Azure requires a password whereas AWS does not. Here is the exception:
System.NullReferenceException: Object reference not set to an instance of an object.
at StackExchange.Redis.ServerEndPoint.get_LastException()
at StackExchange.Redis.ExceptionFactory.GetServerSnapshotInnerExceptions(ServerEndPoint[] serverSnapshot)
at StackExchange.Redis.ExceptionFactory.NoConnectionAvailable(Boolean includeDetail, RedisCommand command, Message message, ServerEndPoint server, ServerEndPoint[] serverSnapshot)
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisDatabase.ScriptEvaluate(String script, RedisKey[] keys, RedisValue[] values, CommandFlags flags)
at Microsoft.Web.Redis.StackExchangeClientConnection.<>c__DisplayClass4.<Eval>b__3()
at Microsoft.Web.Redis.StackExchangeClientConnection.RetryForScriptNotFound(Func`1 redisOperation)
at Microsoft.Web.Redis.StackExchangeClientConnection.RetryLogic(Func`1 redisOperation)
at Microsoft.Web.Redis.StackExchangeClientConnection.Eval(String script, String[] keyArgs, Object[] valueArgs)
at Microsoft.Web.Redis.RedisConnectionWrapper.Set(ISessionStateItemCollection data, Int32 sessionTimeout)
at Microsoft.Web.Redis.RedisSessionStateProvider.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
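A minimal StackExchange.Redis connectivity check along these lines (just a sketch; the endpoint below is a placeholder) can help isolate whether the problem is the session state provider itself or basic reachability of the ElastiCache endpoint from the web farm:

using System;
using StackExchange.Redis;

class RedisConnectivityCheck
{
    static void Main()
    {
        var options = new ConfigurationOptions
        {
            // Placeholder endpoint: substitute the real ElastiCache node/cluster endpoint.
            EndPoints = { "my-cache.xxxxxx.0001.use1.cache.amazonaws.com:6379" },
            // Keep retrying in the background instead of throwing on the first failed connect.
            AbortOnConnectFail = false,
            ConnectTimeout = 5000
        };

        using (var muxer = ConnectionMultiplexer.Connect(options))
        {
            var db = muxer.GetDatabase();
            db.StringSet("connectivity-check", DateTime.UtcNow.ToString("o"));
            Console.WriteLine((string)db.StringGet("connectivity-check"));
        }
    }
}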

Related

Sitecore Team Development Service (TDS) will not synchronise on new Windows 10 machine with VS2017

I have set up a new development machine on Windows 10 with VS2017 and Sitecore 8.2 rev 170728, and I am experiencing issues with TDS.
The TDS projects load, the Sitecore connector installs, and I can see the _DEV folder and related files in the website folder, but when I try to synchronise or perform the test from the TDS project properties I get a 503 error.
Exception The HTTP service located at http://sitecore/_DEV/TdsService.asmx is unavailable. This could be because the service is too busy or because no endpoint was found listening at the specified address. Please ensure that the address is correct and try accessing the service again later. (ServerTooBusyException):
Server stack trace:
at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at HedgehogDevelopment.SitecoreProject.VSIP.SitecoreConnector.TdsServiceSoap.Version(VersionRequest request)
at HedgehogDevelopment.SitecoreProject.VSIP.SitecoreConnector.TdsServiceSoapClient.HedgehogDevelopment.SitecoreProject.VSIP.SitecoreConnector.TdsServiceSoap.Version(VersionRequest request)
at HedgehogDevelopment.SitecoreProject.VSIP.SitecoreConnector.TdsServiceSoapClient.Version()
at HedgehogDevelopment.SitecoreProject.VSIP.Utils.Support.CheckClientVersion(TdsServiceSoapClient client, SitecoreProjectNode project, Boolean retry)
Inner Exception Details:
Exception The remote server returned an error: (503) Server Unavailable. (WebException):
at System.Net.HttpWebRequest.GetResponse()
at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
Exception An error occured getting the TDS service version. Please review the Sitecore logs and/or windows events on the server to determine the problem. (ApplicationException):
at HedgehogDevelopment.SitecoreProject.VSIP.Utils.Support.GetTdsServiceSoapClient(String sitecoreWebUrl, String sitecoreAccessGuid, SitecoreProjectNode project, Boolean checkVersion)
at HedgehogDevelopment.SitecoreProject.VSIP.Utils.Support.GetTdsServiceSoapClient(SitecoreProjectNode project, Boolean checkVersion)
at HedgehogDevelopment.SitecoreProject.VSIP.ToolWindows.SyncWithSitecoreToolWindow.SyncItemsWithSitecore(SitecoreProjectNode project, SitecoreItemNode syncRoot, Boolean sycnChildren)
at HedgehogDevelopment.SitecoreProject.VSIP.SitecoreProjectPackage.ShowSitecoreSyncWindow(SitecoreProjectNode project, SitecoreItemNode selectedItem, Boolean syncChildren)
Everything is set correctly and the website is definitely up and running.
When I access the TdsService.asmx page directly, I get an error trying to verify the version number:
System.Web.Services.Protocols.SoapException: Service is Locked ---> HedgehogDevelopment.Padlock.PadlockException: Service is Locked
at HedgehogDevelopment.Padlock.Locking.CheckLock()
at HedgehogDevelopment.SitecoreProject.Service.TdsService.Version()
--- End of inner exception stack trace ---
at HedgehogDevelopment.SitecoreProject.Service.TdsService.Version()
I have tried uninstalling and reinstalling the Sitecore connector, tried different versions of TDS and different VS projects, and even pointed TDS at a clean Sitecore installation.
I am not seeing any information related to this error in the Windows event logs, the IIS logs or the Sitecore logs.
I have granted full control access to the website folder, the folder the code is in and the inetpub folder.

Frequent HttpRequestException when talking to Cosmos DB from Web App

We have a web service (Azure App Service) deployed to Azure that talks to our Azure Cosmos DB via the standard C# SDK for Cosmos DB/Document DB.
Both the app service and the Cosmos DB account/collections are in the same resource group and the same location in Azure.
For certain bulk operations, where the web service performs a burst of requests to Cosmos DB, we frequently get errors in the web service when talking to the database:
System.AggregateException: One or more errors occurred. ---> System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.InternalBind(EndPoint localEP)
at System.Net.Sockets.Socket.BeginConnectEx(EndPoint remoteEP, Boolean flowContext, AsyncCallback callback, Object state)
at System.Net.Sockets.Socket.UnsafeBeginConnect(EndPoint remoteEP, AsyncCallback callback, Object state)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
[...]
at Microsoft.Azure.Documents.Query.ProxyDocumentQueryExecutionContext.<ExecuteNextAsync>d__0.MoveNext()<---
Each of our ApiController instances statically allocates a single repository class, which in turn fetches an IReliableReadWriteDocumentClient instance in its constructor and holds it for its entire lifetime via:
IDocumentDbInitializer dbinit = new DocumentDbInitializer();
Client = dbinit.GetClient(endpointUrl, myAuthKey, connectionPolicy);
So, as I understand it, we should be using only two DocumentDB clients for our two repositories in the entire web service.
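For illustration, a stripped-down sketch of that pattern (the repository name here is made up, the endpoint and key are placeholders, and IDocumentDbInitializer/GetClient come from the wrapper library we use):

public class OrderRepository
{
    // One client per repository, created once and reused for the lifetime of the process.
    private static readonly IReliableReadWriteDocumentClient Client = CreateClient();

    private static IReliableReadWriteDocumentClient CreateClient()
    {
        var connectionPolicy = new ConnectionPolicy
        {
            // Default is 50; one of the values we experimented with.
            MaxConnectionLimit = 20
        };

        IDocumentDbInitializer dbinit = new DocumentDbInitializer();
        return dbinit.GetClient("https://<account>.documents.azure.com:443/", // placeholder endpoint
                                "<auth key>",                                 // placeholder key
                                connectionPolicy);
    }
}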
Things we've tried so far:
throttle the requests on the client during the bulk operation to fewer than 3/s
reduce the client's ConnectionPolicy.MaxConnectionLimit from the default (50) to 20
increase the app's ServicePointManager.DefaultConnectionLimit
None of these measures significantly reduced the number of exceptions we're experiencing.
Any suggestions on how to avoid this error in the first place? Is there additional Cosmos DB SDK functionality we could tune/configure/adapt for our use case?

Unable to launch task from a spring cloud data flow stream

I registered my task app in Spring Cloud Data Flow and created a definition for it, but the status shows 'unknown'. I created the stream, and when I try to launch the task through task-sink I get an error:
java.lang.IllegalStateException: failed to resolve MavenResource:
How do I launch a task from the task-sink? Am I missing something? Any help is appreciated. Another question: how do I access the payload sent via TaskLaunchRequest in my task?
S1 http | step1: transformer-rabbit | log
S2 :S1.step1 > filter --expression=payload.contains('CUSTADDRMODRQ_V15') | task-processor | task-sink
task-sink launches the task at the URI provided in the TaskLaunchRequest. It looks for the resource as shown in the log, and finally fails:
OUT Using manager EnhancedLocalRepositoryManager with priority 10.0 for /home/vcap/.m2/repository
OUT Using transporter HttpTransporter with priority 5.0 for https://repo.spring.io/libs-snapshot
The task artifact is deployed in our repository and, as mentioned, I registered it and created the definition for it as well.
This is in a CF environment and I am using SCDF server 1.0.0.M4.
In the application.properties for the task-sink I am providing maven.remote.repositories.snapshots.url=**
task create fis-ifx-event-task --definition "fis-event-task"
My goal is to launch the task from the stream.
Thanks for the information. I am in fact using the BUILD-SNAPSHOT, as I am unable to enable tasks in the 1.0.0.M4 version. The one I am using is spring-cloud-dataflow-server-cloudfoundry-1.0.0.BUILD-20160808.144306-116. I am able to register and create task definitions, but the status of the task definition shows as 'unknown' even when I am using the sample task module provided by your team. And when I initiate the flow of the stream and task-sink tries to launch the task, it is unable to find the Maven resource.
When I create the task definition, does the task module get deployed? I don't see any app in Pivotal Apps Manager. As mentioned earlier, I provided maven.remote.repositories.snapshot.url in the application.properties file for the task-sink application.
Another thing I observed is that when I launch the task manually from the Data Flow shell, it gives an error CF-UnprocessableEntity(10008): The request is semantically invalid: Unknown field(s): 'staging_disk_in_mb', 'staging_memory_in_mb', and also a message saying 'Source is empty'. At present the task is only supposed to print the timestamp and is not dependent on any input.
TaskProcessor code:
@EnableBinding(Processor.class)
@EnableConfigurationProperties(TaskProcessorProperties.class)
public class TaskProcessor {

    @Autowired
    private TaskProcessorProperties processorProperties;

    public TaskProcessor() {
    }

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @ELI(level = "info", eventType = ELIEventType.INBOUND)
    public Object setupRequest(String message) {
        Map<String, String> properties = new HashMap<String, String>();
        properties.put("payload", message);
        TaskLaunchRequest request = new TaskLaunchRequest(processorProperties.getUri(), null, properties, null);
        return new GenericMessage<>(request);
    }
}
TaskSink code:
@SpringBootApplication
@EnableTaskLauncher
@EnableBinding(Sink.class)
@EnableConfigurationProperties(TaskSinkProperties.class)
public class FisIfxEventTaskSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(FisIfxEventTaskSinkApplication.class, args);
    }
}
I provided the stream I am using earlier in the post. The sink receives the TaskLaunchRequest with the URI and payload, as you can see here, but is unable to launch the task:
OUT registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2016-08-10T16:08:55.02-0600 [APP/0]
OUT Launching Task for the following resource TaskLaunchRequest{uri='maven://com.xxx:fis.ifx.event-task:jar:1.0-SNAPSHOT', commandlineArguments=[], environmentProperties={payload={"statusCode":0,"fisTopic":"CustomerDataUpdated","payloadId":"CUSTADDRMODRQ_V15","customerIds":[1597304]}}, deploymentProperties={}}
Before I begin, you have a number of questions here. In the future, it's better to break them up into multiple questions so that they are easier to find by other users and easier to answer. That being said:
A little context on the current state of things
In order to understand how things will work, it's important to understand the current state of things. The current releases of the software involved are:
Pivotal Cloud Foundry (PCF) - 1.7.12. This version is required for any task support.
Spring Cloud Task (SCT) - 1.0.2.RELEASE
Spring Cloud Data Flow CF (SCDF) - 1.0.0.BUILD-SNAPSHOT (current as of the date of this post).
Currently PCF 1.7.12+ has all the capabilities to run tasks. You can create v3 applications (the type of application used to launch a task), run them as tasks, etc. However, the tooling around that functionality is not yet complete. There is no support for v3 applications in Apps Manager or the CLI. There is a plugin for the CLI that is more of a dev tool and can be used to help with some functions (it will show you logs, etc.), but it is not fully functional and requires a specific version of the CLI to work [1]. This is one of the reasons that the task functionality within PCF is still considered experimental.
Spring Cloud Task is currently GA and supports all the functionality needed to effectively run tasks on CF. However, it's important to note that SCT doesn't handle orchestration, so the actual launching of tasks on CF is the responsibility of either the user or Spring Cloud Data Flow (the easier route).
Spring Cloud Data Flow's Cloud Foundry server implementation currently has functionality to launch tasks on PCF in the latest snapshots. We have validated this against 1.7.12 as well as the development branch of 1.8.
The task workflow within SCDF
Tasks are fundamentally different from stream applications within the context of SCDF. When you create a stream definition, you are given the option to deploy it. Deploying actually downloads the Spring Boot über jars and deploys them to PCF as long-running processes. If they go down, PCF will relaunch them as expected, etc.
Tasks, on the other hand, are not deployed; they are launched. The difference is that while you create a task definition, nothing is deployed until you click launch, and when the task completes, the software is shut down and cleaned up. So while a stream definition may have states, it's really a one-to-one relationship between the definition and the deployed software, whereas with a task, you can launch a task definition as many times as you want.
Your issues
Reading through your post, I see a few things that you are struggling with. Let me see if I can help:
Task definitions within SCDF and launching them via a stream - When launching a task from a stream, the task registry within SCDF is not used. The sink expects the URL for the resource to be within the TaskLaunchRequest.
Apps Manager and tasks - As mentioned above, there is no support for v3 applications in Apps Manager yet so you won't be able to see your tasks there.
Viewing the logs - In order to debug what's going wrong with launching your task on CF, you're going to want to view the logs. To do so, use the v3 CLI plugin mentioned above to view them. It's important to note that you can only tail live logs with the plugin, not view logs that have previously been rendered. Because of that, when testing, you'll want to tail the logs as soon as the app is created, before it's launched.
Error in SCDF Shell - The error you received from the SCDF shell (CF-UnprocessableEntity(10008):...) leads me to wonder if you have both the correct version of PCF (1.7.12+) and the correct versions of the following other libraries:
spring-cloud-deployer-cloudfoundry - The latest snapshots
cf-java-client - 2.0.0.M10+
reactor-core - 3.0.0.RC1+
I hope this helps!
[1] https://github.com/cloudfoundry/v3-cli-plugin
Task support is not available in the 1.0.0.M4 release of SCDF's CF server. In this release, the task commands/REST APIs are disabled - see here. For that reason, you won't see any docs related to tasks in the 1.0.0.M4 reference guide.
That said, task support is available/enabled in the BUILD-SNAPSHOT release. If you build the CF server locally and push it to CF, you can take advantage of the task commands in the shell to create and launch task definitions.

Word Automation Service BatchGetSyncJobStatus fails when requesting security token

I'm running a SharePoint 2013 on-premises server on which I have deployed a simple WCF service as a farm solution. The service accepts simple HTTP POST requests that contain single MS Word documents as payload and returns these files converted into PDFs.
The service is accessible via HTTP to anonymous users. The WordAutomationService is running as the Administration user account of the SharePoint server.
The service class creates a new instance of Microsoft.Office.Word.Server.Conversions.SyncConverter and passes the proxy of SharePoint's running WordAutomationService into the constructor (together with some ConversionJobSettings). Finally, it calls the Convert method on the SyncConverter with the input stream (the Word document) and the output stream (the web response, which will contain the resulting PDF document produced by the WordAutomationService).
When creating the SyncConverter I don't set the UserToken property because the service is accessed by anonymous users. According to the remarks here https://msdn.microsoft.com/en-us/library/microsoft.office.word.server.conversions.syncconverter.usertoken.aspx this seems to be fine:
The default value for this property is a null reference (Nothing in Visual Basic), which is anonymous.
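For context, the conversion call looks roughly like this (a simplified sketch rather than the exact code; the helper name and parameter names are illustrative, and proxy is the Word Automation Service application proxy resolved from the farm):

using System.IO;
using Microsoft.Office.Word.Server.Conversions;
using Microsoft.Office.Word.Server.Service;

internal static class PdfConversion
{
    public static void ConvertToPdf(WordServiceApplicationProxy proxy, Stream wordInput, Stream pdfOutput)
    {
        var settings = new ConversionJobSettings
        {
            OutputFormat = SaveFormat.PDF   // convert the posted Word document to PDF
        };

        // UserToken is deliberately left at its default (null), i.e. anonymous.
        var converter = new SyncConverter(proxy, settings);

        // Reads the Word document from wordInput and writes the resulting PDF to pdfOutput.
        converter.Convert(wordInput, pdfOutput);
    }
}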
This setup works fine for small Word documents with a couple of pages and returns the expected PDF files. But as soon as the execution time of the WordAutomationService on the SharePoint server exceeds a certain threshold (around 5 seconds), the service fails because it never returns (which leads to a read timeout on the client).
According to the logs it seems the reason for this is that after some time the synchronous conversion job moves the work into a background process:
Sync Stream job conversion takes too long. Don't wait anymore. Check its status later
It then polls the status of this job on a regular basis by calling ConversionServiceApplicationProxy.BatchGetSyncJobStatus. Unfortunately, this call fails because internally it tries to create a new channel to talk to this process and, to do so, requests a security token. The SecurityTokenService, however, cannot complete the token request and throws an exception:
An unhandled exception has occurred. The security token request cannot be completed. System.InvalidOperationException: The security token request cannot be completed.
at Microsoft.SharePoint.SPSecurityContext.SecurityTokenForServiceContext(Uri contextUri)
at Microsoft.SharePoint.SPChannelFactoryOperations.InternalCreateChannelActingAsLoggedOnUser[TChannel](ChannelFactory`1 factory, EndpointAddress address, Uri via)
at Microsoft.Office.ConversionServices.Service.ConfigChannelFactory`1.CreateChannel(EndpointAddress address)
at Microsoft.Office.ConversionServices.Service.ConversionServiceApplicationProxy.GetChannel(Uri uri)
at Microsoft.Office.ConversionServices.Service.ConversionServiceApplicationProxy.ExecuteOnChannel(Uri endpointAddress, Action`1 action)
at Microsoft.Office.ConversionServices.Service.ConversionServiceApplicationProxy.BatchGetSyncJobStatus(ICollection`1 ucids, Uri endpointAddress)
at Microsoft.Office.ConversionServices.Service.BatchGetStatusPollingThread.Run()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart() StackTrace:
at onetnative.dll: (sig=37460b31-4453-4365-92f5-3a11c267be48|2|onetnative.pdb, offset=28F56) at onetnative.dll: (offset=15735)
I'm now at a loss as to how to get rid of the token issue so that the system can create the necessary channel to poll the conversion job status. Any help is highly appreciated. Thanks!
(I can't post the full log because it registers as spam.)
I've found that, if you install SharePoint 2013 on a Domain Controller (a topology that Microsoft says is only good for development, not for production), then the default anonymous user in IIS (IUSR) will not work reliably, and any WCF solution accessed via an IIS site that has Anonymous Access configured to use the IUSR account will fail when it attempts to access the Security Token Service.
In this case the most expedient solution is to reconfigure IIS to use another anonymous identity, namely the identity tied to the Application Pool.
For example, if your site is called NameOfSite, you can run this in an elevated PowerShell session:
# Make anonymous authentication on the site run as the application pool identity
# (an empty userName tells IIS to use the app pool identity instead of IUSR).
Set-WebConfigurationProperty `
    -Filter /system.webServer/security/authentication/anonymousAuthentication `
    -Name userName `
    -Value "" `
    -Location "NameOfSite"
This solves the immediate problem at hand, which is that SecurityTokenForServiceContext fails. However, if you've installed SharePoint 2013 on Windows Server 2012 R2 as a Domain Controller, then it is not over: WordServerWorker will not actually start in this configuration.
I can also confirm, however, that if you were to install SharePoint 2013 on a standalone server (with <Setting Id="SERVERROLE" Value="SINGLESERVER"/> role in the unattended config file), then the entire solution works end-to-end, and WordServerWorker will actually start properly.
Previously, the most relevant (and unanswered) question on this must be this MSDN posting, “The security token request cannot be completed”. I would assume that in that case, the service was only in a meta-stable state, and one of the IIS workers would have previously obtained credentials via NTLM during local testing.
Usually, when SharePoint service applications interact with each other, they maintain the current user context through the WCF calls by using the Service Application Framework (SAF). This allows the services to use SPContext.Current, preserve the correlation ID between calls in the logs, and so on. When this context is lost, the services stop being able to communicate with each other. For example, this happens when code starts a new thread but doesn't set up the user for the newly created thread's context.
According to your description, your service is anonymous and doesn't use SAF to maintain the user context, but it uses services that require that context to exist.
A possible solution would be to use SAF (which is, in a nutshell, WCF with some tricky configuration) instead of plain WCF services with no authentication.
Edit:
One more possible solution may be to wrap your code in RunWithElevatedPrivileges so that your service connects to SharePoint with the application pool identity, as in the sketch below.
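A minimal sketch of that idea (not the exact service code; ConvertRequestToPdf is a stand-in for the existing code that creates the SyncConverter and calls Convert):

using System.IO;
using Microsoft.SharePoint;

public class ConversionEndpoint
{
    public void HandleConversionRequest(Stream wordInput, Stream pdfOutput)
    {
        // Run the conversion under the application pool identity rather than the anonymous caller.
        SPSecurity.RunWithElevatedPrivileges(() => ConvertRequestToPdf(wordInput, pdfOutput));
    }

    // Stand-in for the existing code that creates the SyncConverter and calls Convert.
    private void ConvertRequestToPdf(Stream wordInput, Stream pdfOutput)
    {
        // ... existing conversion logic ...
    }
}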

How do I test the Azure Webjobs SDK projects locally?

I want to be able to test an Azure WebJobs SDK project locally before I actually publish it to Azure.
If I make a brand-new Azure WebJobs project, I get some code that looks like this:
Program.cs:
// To learn more about Microsoft Azure WebJobs SDK, please see http://go.microsoft.com/fwlink/?LinkID=320976
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        var host = new JobHost();
        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
Functions.cs:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // on an Azure Queue called queue.
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
I would like to test whether the QueueTrigger function is working properly, but I can't even get that far, because on host.RunAndBlock(); I get the following exception:
An unhandled exception of type 'System.InvalidOperationException' occurred in mscorlib.dll
Additional information: Microsoft Azure WebJobs SDK Dashboard connection string is missing or empty. The Microsoft Azure Storage account connection string can be set in the following ways:
Set the connection string named 'AzureWebJobsDashboard' in the connectionStrings section of the .config file in the following format, or
Set the environment variable named 'AzureWebJobsDashboard', or
Set corresponding property of JobHostConfiguration.
I ran the storage emulator and set the AzureWebJobsDashboard connection string like so:
<add name="AzureWebJobsDashboard" connectionString="UseDevelopmentStorage=true" />
but when I did that, I got a different error:
An unhandled exception of type 'System.InvalidOperationException' occurred in mscorlib.dll
Additional information: Failed to validate Microsoft Azure WebJobs SDK Dashboard account. The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account hosted in Microsoft Azure.
Is there any way to test my use of the WebJobs SDK locally?
WebJobs 2.0 now works using development storage (I'm using v2.0.0-beta2).
Note that latency in general, and Blob trigger latency in particular, is currently far better than what you get in production. Design with care.
If you want to test the WebJobs SDK locally, you need to set up a storage account in Azure. You can't test it against the Azure Emulator. That's what that error is telling you.
Failed to validate Microsoft Azure WebJobs SDK Dashboard account. The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account hosted in Microsoft Azure.
So to answer your question, you can create a storage account in Azure using the portal, and then set up your connection string in the app.config of your Console Application. Then just drop a message to the queue and run the Console Application locally and it will pick it up (assuming you're trying to interact with the queue obviously).
Make sure that you replace the [QueueTrigger("queue")] "queue" with the name of the queue you want to poll.
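Alternatively, as the error message notes, you can set the corresponding properties on JobHostConfiguration in code instead of app.config; a rough sketch (the connection strings are placeholders for your real account name and key):

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration
        {
            // Placeholders: substitute your storage account's connection string.
            DashboardConnectionString = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>",
            StorageConnectionString = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        };

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}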
Hope this helps