AWS hosted .Net Core 2.0 website won't start when using anything but default app.UseStaticFiles() - amazon-web-services

So I've been trying to serve images from my Uploads directory at the root level of my AWS hosting, using the static files config below in my .NET Core 2.1 app. It works locally, but when I deploy to AWS the application won't even start, failing with the error below.
app.UseStaticFiles();
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(env.ContentRootPath, "Uploads")),
    RequestPath = new PathString("/Uploads")
});
The AWS error is just the critical crash on startup below:
An error occurred while starting the application.
.NET Core 4.6.26814.03 X64 v4.0.0.0 | Microsoft.AspNetCore.Hosting version 2.1.1-rtm-30846 | Microsoft Windows 6.1.7601 S | Need help?

In case anyone else finds this: it seems that AWS doesn't allow serving files from below the root level of deployed websites. To get around the startup crash I've just settled for the default below, despite it being a bit of a security issue.
app.UseStaticFiles();
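For anyone who wants to keep the Uploads mapping, here is a hedged sketch of an alternative, assuming the startup crash comes from PhysicalFileProvider throwing because the Uploads folder doesn't exist on the deployed instance (it throws DirectoryNotFoundException for a missing root):
// Sketch only: make sure the Uploads folder exists before wiring up the middleware,
// so PhysicalFileProvider has a valid root to serve from.
var uploadsPath = Path.Combine(env.ContentRootPath, "Uploads");
Directory.CreateDirectory(uploadsPath); // no-op if the folder already exists

app.UseStaticFiles(); // default wwwroot serving
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(uploadsPath),
    RequestPath = new PathString("/Uploads")
});
If the folder exists on the server and the app still won't start, the cause is something else and this sketch won't help.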

Related

Unity UWP build can't find AWSCredentials in the shared credentials file in the default location

I'm making some tests (create a bucket, upload a file, list buckets, etc.) with Unity-UWP and Amazon AWS.
When I play it in the Editor, everything works fine, but when I try to find my AWS credentials in the UWP build, it can't find them. This is my code:
// Fields assumed to be declared elsewhere in this MonoBehaviour (not shown in the original snippet):
// CredentialProfileStoreChain chain; AWSCredentials awsCredentials; AmazonS3Client client;
void Start()
{
    chain = new CredentialProfileStoreChain();
    if (chain.TryGetAWSCredentials("default", out awsCredentials))
    {
        client = new AmazonS3Client(awsCredentials);
        Debug.Log("Credential OK");
    }
    else
    {
        Debug.Log("Credential NO OK");
    }
}
So every time I get "Credential NO OK" and can't continue with the tests.
Could it be because UWP is heavily sandboxed and the user hasn't given explicit access to the "credentials" and "config" files in the default location?
If so, what could be the solution or workaround? I wouldn't like to put my credentials in the code.
Unity Version: 2020.3.3f1
AWS SDK: Version 3.7.38 of the netstandard2.0 DLLs
Build Environment: Visual Studio 2019
Build Type: Executable Only (for fast iteration and local test)
Build Configuration: Release
Target Architecture: x64
Test Environment: UWP running on Windows 10 Desktop, build 19041.985
Api Compatibility Level: .NET Standard 2.0
I also added a "link.xml" file to preserve my AWS DLLs, and enabled the "Internet Client" capability in both the player settings and the appxmanifest.
Thank you in advance.
As a workaround for this situation, I set my credentials in environment variables. As the post "From where and in what order does the AWS .NET SDK load credentials?" suggests, environment variables are one option for supplying the AWS credentials without hard-coding them.
You can learn how to configure the environment variables in the Amazon docs.
This doesn't explain why "TryGetAWSCredentials" from the default location doesn't work on UWP, but it allows me to continue with my tests.
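As a minimal sketch of that workaround for local testing, the variable names below are the standard ones the AWS SDK for .NET reads; the values are placeholders, and you can also set them at the OS level instead of from code:
// Sketch: supply credentials through the standard AWS environment variables
// instead of the shared credentials file. The values are placeholders, not real keys.
Environment.SetEnvironmentVariable("AWS_ACCESS_KEY_ID", "YOUR_ACCESS_KEY_ID");
Environment.SetEnvironmentVariable("AWS_SECRET_ACCESS_KEY", "YOUR_SECRET_ACCESS_KEY");
Environment.SetEnvironmentVariable("AWS_REGION", "us-east-1");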
Finally, here is my code to, for example, list my buckets with this approach:
public async void RegionCredentials()
{
    Amazon.RegionEndpoint region = Amazon.RegionEndpoint.GetBySystemName("us-east-1");
    using (var client3 = new AmazonS3Client(region))
    {
        var response = await client3.ListBucketsAsync();
        Debug.Log("Region OK");
        Debug.Log("Region - Number of buckets: " + response.Buckets.Count);
        foreach (S3Bucket bucket in response.Buckets)
        {
            Debug.Log("Region - You own Bucket with name: " + bucket.BucketName);
        }
    }
}

google api python client keeps using an old version of my local app engine endpoints

I have two python projects running locally:
A cloud endpoints python project using the latest App Engine version.
A client project which consumes the endpoint functions using the latest google-api-python-client (v 1.5.1).
Everything was fine until I renamed one endpoint's function from:
@endpoints.method(MyRequest, MyResponse, path = "save_ocupation", http_method='POST', name = "save_ocupation")
def save_ocupation(self, request):
    [code here]
To:
@endpoints.method(MyRequest, MyResponse, path = "save_occupation", http_method='POST', name = "save_occupation")
def save_occupation(self, request):
    [code here]
Looking at the local console (http://localhost:8080/_ah/api/explorer) I see the correct function name.
However, by executing the client project that invokes the endpoint, it keeps saying that the new endpoint function does not exist. I verified this using the ipython shell: The dynamically-generated python code for invoking the Resource has the old function name despite restarting both the server and client dozens of times.
How can I force the API client to always get the latest endpoint API document?
Help is appreciated.
Just after posting the question, I resumed my Ubuntu PC and started Eclipse and the Python projects from scratch, and now everything works as expected. This sounds like some kind of HTTP client cache, or a stale Python process, that prevented the client from getting the latest discovery document and generating the corresponding resource code.
This is odd, as I had tested running these projects both outside and inside Eclipse without success. But I prefer to document this just in case someone else has the same issue.

How do I test the Azure Webjobs SDK projects locally?

I want to be able to test an Azure WebJobs SDK project locally, before I actually publish it to Azure.
If I make a brand new Azure Web Jobs Project, I get some code that looks like this:
Program.cs:
// To learn more about Microsoft Azure WebJobs SDK, please see http://go.microsoft.com/fwlink/?LinkID=320976
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        var host = new JobHost();
        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
Functions.cs:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // on an Azure Queue called queue.
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
I would like to test whether or not the QueueTrigger function is working properly, but I can't even get that far, because on host.RunAndBlock(); I get the following exception:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Microsoft Azure WebJobs SDK Dashboard
connection string is missing or empty. The Microsoft Azure Storage
account connection string can be set in the following ways:
Set the connection string named 'AzureWebJobsDashboard' in the connectionStrings section of the .config file in the following format, or
Set the environment variable named 'AzureWebJobsDashboard', or
Set corresponding property of JobHostConfiguration.
I ran the storage emulator and set the AzureWebJobsDashboard connection string like so:
<add name="AzureWebJobsDashboard" connectionString="UseDevelopmentStorage=true" />
but when I did that, I got a different error:
An unhandled exception of type 'System.InvalidOperationException'
occurred in mscorlib.dll
Additional information: Failed to validate Microsoft Azure WebJobs SDK
Dashboard account. The Microsoft Azure Storage Emulator is not
supported, please use a Microsoft Azure Storage account hosted in
Microsoft Azure.
Is there any way to test my use of the WebJobs SDK locally?
WebJobs 2.0 now works using development storage (I'm using v2.0.0-beta2).
Note that latency in general, and Blob triggers in particular, are currently far better than you can get in production. Design with care.
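As a minimal sketch of what that looks like with the 2.x SDK, assuming the Azure Storage Emulator is running locally (the connection strings are set in code here just for brevity; app.config works too):
// Sketch (WebJobs SDK 2.x): point both connection strings at development storage.
// Assumes the Azure Storage Emulator has been started locally.
var config = new JobHostConfiguration
{
    DashboardConnectionString = "UseDevelopmentStorage=true",
    StorageConnectionString = "UseDevelopmentStorage=true"
};
var host = new JobHost(config);
host.RunAndBlock();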
If you want to test the WebJobs SDK locally, you need to set up a storage account in Azure. You can't test it against the Azure Emulator. That's what that error is telling you.
Failed to validate Microsoft Azure WebJobs SDK Dashboard account. The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account hosted in Microsoft Azure.
So to answer your question, you can create a storage account in Azure using the portal, and then set up your connection string in the app.config of your Console Application. Then just drop a message to the queue and run the Console Application locally and it will pick it up (assuming you're trying to interact with the queue obviously).
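For reference, the app.config entries this refers to look something like the sketch below (account name and key are placeholders, not real values):
<connectionStrings>
  <add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
  <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
</connectionStrings>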
Make sure that you replace the [QueueTrigger("queue")] "queue" with the name of the queue you want to poll.
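And here is a sketch of dropping a test message onto that queue from a separate bit of code, using the classic WindowsAzure.Storage client (the connection string is a placeholder):
// Sketch: enqueue a test message so the locally running WebJob's QueueTrigger
// function picks it up. Requires the Microsoft.WindowsAzure.Storage and
// Microsoft.WindowsAzure.Storage.Queue namespaces.
var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY");
var queueClient = account.CreateCloudQueueClient();
var queue = queueClient.GetQueueReference("queue"); // same name as in [QueueTrigger("queue")]
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage("Hello from a local test"));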
Hope this helps

Web Service Data Stage

I get the following error: Service Invocation Exception. I am working with IBM InfoSphere DataStage and QualityStage Designer version 8.7, using a server job that contains a sequential file stage, a web service stage, and another sequential file stage.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that calls the web service is configured properly.
You should also check the DataStage logs to get more information about the root cause of the error.

Error when deploying applications to cloud foundry using cloud bees plugin

I have integrated my Cloud Foundry account with CloudBees as described at
http://docs.cloudfoundry.com/docs/dotcom/integration/cloudbees/
and am trying to deploy a few sample applications from GitHub.
The build was successful every time, but when I deployed the apps using this plugin, it threw the same exception for the 2-3 applications I have tried.
[INFO] Deployment done in 1.2 sec
[cloudbees-deployer] Deploying as (jenkins) to the svcnvghi293 account
[cloudbees-deployer] Deploying null
com.cloudbees.plugins.deployer.exceptions.DeployException: Could not create DeployEvent
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:132)
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:51)
at com.cloudbees.plugins.deployer.engines.Engine.perform(Engine.java:82)
at com.cloudbees.plugins.deployer.DeployPublisher.perform(DeployPublisher.java:95)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:728)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:703)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:994)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:650)
at hudson.model.Run.execute(Run.java:1530)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:477)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:237)
Caused by: java.lang.NullPointerException
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl$EventImpl.<init>(RunEngineImpl.java:208)
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:124)
... 12 more
Build step 'Deploy applications' marked build as failure
Finished: FAILURE
Does anyone have any idea about this?
Thanks in advance.
After a bit of digging I figured out which account you have.
The issue is that you had left the CloudBees RUN@cloud host service in the list of host services to deploy to, but you had not provided a complete configuration for it; see, for example, the "Application Id cannot be empty" red error text in this screenshot.
I have removed this host section and saved your hellospring job. Build 8 shows a successful deployment.