I'm studying some Django, mainly DRF, but sometimes what I do turns out differently from what my instructor does. Right now I'm facing a problem when I try to debug my code with the VS Code debugger, as he does in the classes. When he starts the application with the debugger, it starts normally, and when he makes a request that passes through a certain view or serializer, the application stops at the breakpoint and waits for his commands, like a normal debug session. When I try the same, the application first stops at the breakpoints while Django is performing the system checks, and then does not stop when the request passes through the view, for example. The only step he took to configure the debugger was creating the launch.json file that VS Code recommends. Here is an example of mine:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Django",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "program": "${workspaceFolder}\\src\\manage.py",
            "args": [
                "runserver"
            ],
            "django": true,
            "justMyCode": false
        }
    ]
}
With these settings, the application starts, stops at the breakpoints during the system checks, and then does not stop anymore, as I said above. It's probably a dumb question, but what's wrong? Here's an example of what happens when it starts:
[screenshot: the debugger paused at a breakpoint while Django runs its startup system checks]
The only thing I tried was adding
"stopOnEntry": false,
to the launch.json, but it did nothing.
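For reference, one variant I could try is disabling Django's auto-reloader, since runserver normally forks a child process to serve requests and I am not sure whether that interferes with the debugger here; this is just a guess, not something my instructor did:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Django (no reload)",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}\\src\\manage.py",
            "args": [
                "runserver",
                "--noreload"
            ],
            "django": true,
            "justMyCode": false
        }
    ]
}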
As described in the documentation, the default behavior for template loading changed drastically in 4.1.
If I understand it correctly, up to 4.0 it worked like this:
With DEBUG enabled, the templates are loaded on every request, so if you keep making changes and reloading while working on a template, you always see the latest version.
With DEBUG disabled, the templates are cached when the application initializes, so you can only see changes in your templates if you also restart the application.
That way, template caching was seamlessly enabled in production, which is great.
Now this ticket's proposal has been included and, if I understand it correctly, the template loading behavior has to be specified explicitly, it is no longer tied to the DEBUG setting, and by default templates are cached.
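If I read the 4.1 release notes correctly, when no "loaders" option is given the effective default is now equivalent to the following (my own paraphrase for illustration, not copied from the docs):
# My paraphrase of the new 4.1 default (illustration only): with no "loaders"
# specified, the engine now wraps the usual loaders in the cached loader even
# when DEBUG is True, and relies on the autoreloader to reset it.
default_loaders_in_4_1 = [
    ("django.template.loaders.cached.Loader", [
        "django.template.loaders.filesystem.Loader",
        "django.template.loaders.app_directories.Loader",
    ]),
]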
We want the original behavior so the frontend developer can see the changes without having to restart the app, and we also want the production deployment to have the caching enabled, so we did this:
develop_loaders = [
    "django.template.loaders.filesystem.Loader",
    "django.template.loaders.app_directories.Loader",
]

production_loaders = [
    ("django.template.loaders.cached.Loader", [
        "django.template.loaders.filesystem.Loader",
        "django.template.loaders.app_directories.Loader",
        "path.to.custom.Loader",
    ])
]

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [
            "templates",
        ],
        "OPTIONS": {
            "context_processors": [
                "maintenance_mode.context_processors.maintenance_mode",
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
                "wagtail.contrib.settings.context_processors.settings",
            ],
            "loaders": develop_loaders if DEBUG else production_loaders,
        },
    },
]
Which works, but I wonder: am I understanding the situation correctly? Do you think this is a solid solution?
Also, it took me a while because when I read the changelog for 4.1 I didn't grasp that this change would have this impact (we had never specified any loader in settings before), so we expected the default behavior to stay the same, which led to looking at gunicorn and Docker as the first suspects, etc. So I thought this question might be useful for other people in a similar situation.
The problem is not with the cached loader but with the signal handling of your OS. The cached loader has a reset method which is called on the file_changed signal, so you benefit from the cached templates even during development.
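To illustrate the mechanism, here is a minimal sketch of roughly what happens (an illustration, not Django's actual implementation): when the autoreloader reports a changed file, every cached loader is reset so the next request re-reads the templates from disk.
from django.dispatch import receiver
from django.template import engines
from django.utils.autoreload import file_changed


@receiver(file_changed)
def reset_cached_template_loaders(sender, file_path, **kwargs):
    # Simplification: a real handler would first check that file_path actually
    # is a template (e.g. lives under one of the configured template dirs).
    for backend in engines.all():
        engine = getattr(backend, "engine", None)  # DjangoTemplates backend only
        if engine is None:
            continue
        for loader in engine.template_loaders:
            if hasattr(loader, "reset"):
                loader.reset()  # clears the cached.Loader's template cache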
Do you use runserver_plus? There is an issue for it: https://github.com/django-extensions/django-extensions/issues/1766
I do not experience the issue with the normal runserver command.
I am trying to go to a specific state directly, or I want the flexibility to start from the beginning of the workflow. I am not able to pass the next state as a dynamic variable. How can I achieve this, please?
workflow:
{
    "Comment": "A description of my state machine",
    "StartAt": "gotochoice",
    "States": {
        "gotochoice": {
            "Type": "Choice",
            "Choices": [
                {
                    "Variable": "$$.Execution.Input.initial",
                    "BooleanEquals": true,
                    "Next": "$$.Execution.Input.startState"
                }
            ],
            "Default": "defaultState"
        }
        // Other states
    }
}
In the workflow above I want to specify the start state dynamically, but "Next" does not accept a variable from the execution context. Is there any workaround or suggestion to fix this, please?
Basically I just want to start my state machine from a certain failed state. I know the approach below can be done, but I don't want to create a new state machine for that. Any other alternative, please?
https://aws.amazon.com/blogs/compute/resume-aws-step-functions-from-any-state/
In case anyone is still looking for an answer: this is not possible at this stage, but it may be in the future, according to AWS support.
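In the meantime, one workaround sketch (my own suggestion, under the assumption that the possible entry points are known in advance; the state names below are placeholders): since "Next" must be a literal state name, enumerate the possible start states as separate Choice rules:
{
    "Comment": "Workaround sketch: route to a known set of start states",
    "StartAt": "gotochoice",
    "States": {
        "gotochoice": {
            "Type": "Choice",
            "Choices": [
                {
                    "Variable": "$$.Execution.Input.startState",
                    "StringEquals": "StateA",
                    "Next": "StateA"
                },
                {
                    "Variable": "$$.Execution.Input.startState",
                    "StringEquals": "StateB",
                    "Next": "StateB"
                }
            ],
            "Default": "defaultState"
        },
        "StateA": { "Type": "Pass", "End": true },
        "StateB": { "Type": "Pass", "End": true },
        "defaultState": { "Type": "Pass", "End": true }
    }
}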
I am trying to do a very simple streaming dataflow as described here:
https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-streaming
I was able to get it to work briefly with some manually typed-in data, but now it isn't populating anymore. I am dumping data into a blob directory, and the data is not showing up in the preview. I am getting notifications that the refresh is failing:
Streaming model definition is syntactically or semantically incorrect.
I keep dumping data into the directory, and nothing shows up in the preview. I've tried turning the dataflow on and off; it makes no difference. Nothing shows up in Power BI, nothing shows up in the data preview, nothing shows up in the input box, and nothing shows up in the output box.
The data is of the form:
[
    { "id": "1", "amount": "3" },
    { "id": "2", "amount": "4" }
]
Although it also fails with data of the form
{
    "id": "1",
    "amount": "3"
}
What would cause such an error message?
I'm trying to do the following in AWS Step Functions:
IF ExampleState fails, do "Next": "Anotherlambda".
IF ExampleState completes successfully, end the execution.
How can I do that? The Choice state doesn't support ErrorEquals: States.TaskFailed.
In my case, when ExampleState fails, the state machine STOPS and gives me an error, but I want to continue, catch some info from the error, and save it with another Lambda.
Thanks!
All I wanted AWS Step Functions to do is: if a state succeeds, finish the execution; if it fails, run another Lambda. Like an if/else in programming.
Step Functions makes this easy for you with a Catch block that only activates if it catches an error and then does what you want. Here's the solution:
"StartAt": "ExampleLambda",
"States": {
"ExampleLambda": {
"Type": "Task",
"Resource": "xxx:function:ExampleLambda",
"Catch": [
{
"ErrorEquals":["States.TaskFailed"],
"Next": "SendToErrorQueue"
}
],
"End": true
}
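As for saving info from the error (mentioned in the question): a catcher also accepts a ResultPath, which injects the error object (its Error and Cause fields) into the input passed to the next state. A sketch of the same catcher with that field added (the path $.error is just an example):
"Catch": [
    {
        "ErrorEquals": ["States.TaskFailed"],
        "ResultPath": "$.error",
        "Next": "SendToErrorQueue"
    }
]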
I can create a SQL DW using ARM with no problem. However, the portal supports an option to also install a sample database, e.g. AdventureWorksDW. How can I do the equivalent using an ARM script?
BTW, I clicked on "automation options" in the portal and it shows an ARM script with an extension that is probably the piece that installs the sample database, but it asks for some parameters (e.g. storageKey, storageUri) that I don't know.
Here's what I think is the relevant portion of the ARM JSON:
"name": "PolybaseImport",
"type": "extensions",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[concat('Microsoft.Sql/servers/', parameters('serverName'), '/databases/', parameters('databaseName'))]"
],
"properties": {
"storageKeyType": "[parameters('storageKeyType')]",
"storageKey": "[parameters('storageKey')]",
"storageUri": "[parameters('storageUri')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"operationMode": "PolybaseImport"
}
More specifically, looking at the ARM deploy script generated from the portal, here are the key elements that I need to know in order to auto deploy using my own ARM script:
…
"storageKey": {
    "value": null  <- without knowing this, I can't deploy.
},
"storageKeyType": {
    "value": "SharedAccessKey"
},
"storageUri": {
    "value": "https://sqldwsamplesdefault.blob.core.windows.net/adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml"  <- this is not a public blob, so I can't look at it
},
…
AFAIK that's currently not possible. The portal kicks off a workflow that provisions the new DW resources, generates the sample DW schema, then loads the data. The sample is stored in a non-public blob, so you won't be able to access it.
I don't think it would be hard to make it available publicly, but it does take some work, so perhaps you should add a suggestion here: https://feedback.azure.com/forums/307516-sql-data-warehouse