Until recently my Django projects would debug fine using a launch.json file, but it stopped working today and I have no idea why.
Changes I recently made to my system:
Clean-installed Windows
Installed Python in the default path for the current user only, not for all users
Problem Description:
As soon as I press F5, the debugger starts for a few milliseconds and then stops automatically without any prompt. Since it does not show any error, I tried logging to a file, but there was nothing in it. It does not even open the terminal to run the command.
The configuration added to my launch.json file is below:
{
  "configurations": [
    {
      "name": "Python: Django",
      "type": "python",
      "request": "launch",
      "program": "${workspaceFolder}\\manage.py",
      "args": [
        "runserver"
      ],
      "django": true,
      "justMyCode": true,
      "logToFile": true,
      "console": "integratedTerminal",
      "cwd": "${workspaceFolder}",
      "host": "localhost"
    }
  ]
}
Can someone help me troubleshoot this? I have already tried:
Creating new projects
Reinstalling Python
Creating a new environment
Resetting VS Code Settings Sync
Running other debug configurations (they are working fine)
Changing the Django debug configuration
My current options are:
Research more (I have already spent hours and it would take more)
Wait for a solution from someone
Clean-install Windows and all software (would be like a Brahmastra)
Posting an answer within a few minutes of posting the question seems weird, but I found a workaround that might help someone give a proper answer to my problem.
The problem was the Python environment path: I had created the environment in the Documents folder on the C drive of the current non-admin user, expecting no issues since Python is installed in the default path for the current user only. But as soon as I created a new environment in the current user's directory, the debugger started working normally.
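For reference, a minimal sketch of what recreating the environment outside Documents looks like, assuming cmd.exe on Windows (the project path is just an example):
rem Example project location under the user profile instead of Documents
cd %USERPROFILE%\projects\mysite
rem Create and activate a fresh virtual environment next to the project
python -m venv .venv
.venv\Scripts\activate
pip install django
After that, pick the new interpreter in VS Code via "Python: Select Interpreter" and start the debugger again.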
The issue seems related to permissions and file paths. I hope this helps; it also raises new questions:
Why can't I use the Documents folder for creating a Python environment?
Is there some other solution to my actual problem?
Adding images for further details:
Debugger not working when the environment is created in the Documents folder (screenshot)
Debugger working as usual when the environment is created in the current user's directory (screenshot)
I'm trying to follow the instructions I found here for debugging a Python SAM application in VS Code
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging-python.html
I'm not sure why they don't use sam build in the example and point to .aws-sam/build, but that's what I'm attempting.
My launch.json looks like this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "SAM CLI Python debug test",
      "type": "python",
      "request": "attach",
      "port": 5890,
      "host": "localhost",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/.aws-sam/build",
          "remoteRoot": "/var/task"
        }
      ]
    }
  ]
}
I'm triggering the Lambdas directly for now, so I'm invoking them like this:
sam local invoke -d 5890
I then put a breakpoint at the beginning of the Lambda handler in the build folder, but when I start the debugger in VS Code, the Lambda executes without stopping at the breakpoint.
I created a GitHub repo with the test project I'm using and a description of how I'm using it.
https://github.com/rupe120/aws-sam-python-debugging-test
Could someone help point me at what I'm missing in my setup?
So, the recommended way to do this is with the AWS Toolkit extension.
https://github.com/awslabs/aws-sam-cli/issues/1926#issuecomment-616600799
The docs suggest using a localRoot of "${workspaceFolder}/hello_world/build" (assuming you're using the Hello World template). However, it only works after removing the build at the end of the path:
"localRoot": "${workspaceFolder}/hello_world"
This way, I got it to work without the AWS Toolkit.
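Putting that change back into the question's launch.json, the pathMappings block ends up looking like this (everything else unchanged):
"pathMappings": [
  {
    "localRoot": "${workspaceFolder}/hello_world",
    "remoteRoot": "/var/task"
  }
]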
That is what I get when I follow the instructions at the official Docker tutorial here: tutorial link
I uploaded my Dockerrun.aws.json file and followed all other instructions.
The logs show nothing even when I click Request.
Does anyone have a clue as to what I need to do, i.e. why would not having a default VPC even matter here? In the past I have only used my AWS account to set up Linux EC2 instances for a Deep Learning nanodegree at Udacity (I briefly tried to set up a VPC just for practice, but I am sure I deleted/terminated everything when I found out it is not included in the free tier).
The author of the official tutorial forgot to mention that you have to add the tag to the image name in the Dockerrun.aws.json file, as below (edited in gedit or any other editor), where :firsttry is the tag:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "hockeymonkey96/catnip:firsttry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
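For completeness, a rough sketch of how that tag gets onto Docker Hub in the first place, assuming the image is built locally from the tutorial's Dockerfile (the image name and tag are taken from the example above):
docker build -t hockeymonkey96/catnip:firsttry .
docker push hockeymonkey96/catnip:firsttry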
It works.
I am following this tutorial https://prakhar.me/docker-curriculum/ and I am trying to create an EBS component.
For the application version I am uploading a file called Dockerrun.aws.json with the following content:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myDockerHubId/catnip",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
However, I am getting this problem:
Error
Could not launch environment: Application version is unusable and cannot be used with an environment
Any idea why the configuration file is not good?
I'm pretty sure this image
myDockerHubId/catnip
does not exist on Docker Hub.
Make sure you use an existing Docker image from Docker Hub.
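A quick way to check, assuming Docker is installed locally, is to try pulling the image (replace the name with your real Docker Hub ID and repository):
docker pull myDockerHubId/catnip
If the pull fails, tag and push your local image under that name first (assuming the local image is called catnip, as in the tutorial):
docker tag catnip myDockerHubId/catnip
docker push myDockerHubId/catnip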
I'm trying to deploy multiple Node.js microservices on AWS Elastic Beanstalk, and I want them deployed on the same instance. It's my first time deploying multiple services, so there are some failures I need someone to help me with. I tried to package them into Docker containers first, and I'm using Docker Compose to manage the structure. It's up and running locally in my virtual machine, but when I deployed it to Beanstalk, I ran into a few problems.
What I know:
I have to choose to deploy as multi-container Docker.
The best practice for managing multiple Node.js services is Docker Compose.
I need a Dockerrun.aws.json for the Node.js app (a rough sketch of the format is below).
I need to create a task definition for that ECS instance.
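For context, the multi-container Dockerrun.aws.json format that Beanstalk expects looks roughly like this; this is a generic sketch rather than my actual file, and the container name, image, memory, and ports are placeholders:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "node-app",
      "image": "myDockerHubId/node-app",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ]
    }
  ]
}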
Where I have problems:
I can only find Dockerrun.aws.json and task_definition.json templates for PHP, so I can't verify whether my configuration for Node.js in those two JSON files is in the correct shape.
It seems like docker-compose.yml, Dockerrun.aws.json, and task_definition.json do similar jobs. I must keep the task definition, but do I still need Dockerrun.aws.json?
I tried to run the task in ECS, but it stopped right away. How can I check the log for the task?
I got:
No ecs task definition (or empty definition file) found in environment
because my task always stops immediately. If I could check the log, it would be much easier for me to troubleshoot.
Here is my task_definition.json:
{
  "requiresAttributes": [],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:231440562752:task-definition/ComposerExample:1",
  "status": "ACTIVE",
  "revision": 1,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 100,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [
        {
          "containerPath": "/usr/share/nginx/html",
          "sourceVolume": "webdata",
          "readOnly": true
        }
      ],
      "name": "nginxexpressredisnodemon_nginx_1",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "nginxexpressredisnodemon_nginx",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": null,
      "cpu": 99,
      "privileged": null
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "/ecs/webdata"
      },
      "name": "webdata"
    }
  ],
  "family": "ComposerExample"
}
I had a similar problem, and it turned out that I had archived the containing folder itself into my Archive.zip file, giving me this structure inside Archive.zip:
RootFolder
- Dockerrun.aws.json
- Other files...
It turned out that by archiving only the RootFolder's content (and not the folder itself), Amazon Beanstalk recognized the ECS Task Definition file.
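In other words, zip the folder's contents rather than the folder; a minimal sketch from the command line, assuming the project sits in RootFolder:
# Zip only the contents of RootFolder so Dockerrun.aws.json ends up at the root of the archive
cd RootFolder
zip -r ../Archive.zip .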
Hope this helps.
For me, it was simply a case of ensuring the name of the file matched the exact casing as described in the AWS documentation.
dockerfile.aws.json had to be exactly Dockerfile.aws.json
I had a similar problem. What fixed it for me was using the EB CLI instead of zipping the files myself; just running eb deploy worked.
For me, CodeCommit was set to no; then, after adding the Dockerrun.aws.json to git, it worked.
I got here due to the same error. My issue was that I was deploying with a label using:
eb deploy --label MY_LABEL
What you need to do is quote the label:
eb deploy --label 'MY_LABEL'
I've had this issue as well. For me the problem was that Dockerrun.aws.json wasn't added to git; eb deploy detects the presence of git.
I ran eb deploy --verbose to figure this out:
INFO: Getting version label from git with git-describe
INFO: creating zip using git archive HEAD
It then lists all the files that will go into the zip; Dockerrun.aws.json isn't there.
git status reports this:
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
Dockerrun.aws.json
nothing added to commit but untracked files present (use "git add" to track)
Adding the file to git and committing helped.
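Concretely, something like this (the commit message is just an example):
git add Dockerrun.aws.json
git commit -m "Add Dockerrun.aws.json"
eb deploy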
In my specific case I could just remove the .git directory in a scripted deploy.
In my case, I had not committed the Dockerrun.aws.json file after creating it, so using eb deploy failed with the same error.