Attaching gdb (C++ debugger) to remote python process in VSCode

I need to debug a C++ library function invoked by a Python module in VSCode on a remote server.
I'm using VSCode Remote Development extensions, and debugging C++ code using gdb inside vscode on the remote server works fine, as does debugging Python on the remote.
I've tried both attaching to a paused (waiting for input) python process started from the terminal, and attaching to a python process at a breakpoint launched using the python (remote) debugger in vscode.
In both cases, I attach to the process ID using this launch.json config:
{
    "name": "GDB Attach proc 0",
    "type": "cppdbg",
    "request": "attach",
    "program": "/home/nfs/mharris/anaconda3/envs/cudf_dev/bin/python",
    "processId": "${command:pickProcess}",
    "linux": {
        "MIMode": "gdb",
        "setupCommands": [
            {
                "text": "-enable-pretty-printing",
                "description": "enable pretty printing",
                "ignoreFailures": true
            },
            {
                "text": "handle SIGPIPE nostop noprint pass",
                "description": "ignore SIGPIPE",
                "ignoreFailures": true
            }
        ]
    }
},
It prints the following in the gdb terminal window:
==== AUTHENTICATING FOR org.freedesktop.policykit.exec ===
Authentication is needed to run `/usr/bin/gdb' as the super user
Authenticating as: aseadmin,,, (aseadmin)
Password: [1] + Stopped (tty output) /usr/bin/pkexec /usr/bin/gdb --interpreter=mi --tty=${DbgTerm} 0</tmp/Microsoft-MIEngine-In-bydexniu.cf5 1>/tmp/Microsoft-MIEngine-Out-mo618uzc.30l
You have stopped jobs.
I believe aseadmin is an admin user on the server; I don't know why it's trying to run gdb as the superuser, or why it's authenticating as that account. I'm connected to the remote using the VSCode Remote - SSH extension with ssh key authentication, and my username is not aseadmin.
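For what it's worth, attaching gdb to a process I own should normally only need ptrace permission rather than root, so one thing I still plan to check (just a diagnostic sketch, not something from my current setup) is whether Yama ptrace restrictions are what forces the elevation:
# check the current Yama ptrace restriction level (0 allows attaching to your own processes)
cat /proc/sys/kernel/yama/ptrace_scope
# temporarily relax it so gdb can attach without elevation (needs sudo; resets on reboot)
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope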
I'm running vscode-insider (Version: 1.37.0-insider) with the remote development extensions (0.15.0), on Ubuntu 16.04 LTS.

Related

Both Dockerize and be able to debug a Django app using vscode

Is it possible to both Dockerize a Django app and still be able to debug it using Visual Studio Code's debugging tool? If yes, how? E.g., using docker-compose to run the Django app, postgres, and a redis instance, and be able to debug the Django app via Visual Studio Code.
Yes, this is possible.
I've done it with a NestJS app, and it should be a similar setup.
Expose a specific port on the Django app service in the compose file first.
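A minimal sketch of that part of the compose file (the service name and ports are placeholders, not taken from the question; 9229 is Node's default inspector port):
services:
  web:                       # the app service (name is a placeholder)
    build: .
    ports:
      - "8000:8000"          # application port
      - "9229:9229"          # debug port exposed on the container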
Create the launch.json file with the following configuration, then replace <port-exposed-on-container> and <directory-on-container> with real values.
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Docker: Attach to Node",
            "type": "node",
            "request": "attach",
            "port": <port-exposed-on-container>,
            "address": "localhost",
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/<directory-on-container>",
            "protocol": "inspector",
            "restart": true
        }
    ]
}
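For the attach request to actually connect, the Node process inside the container also has to be started with the inspector listening on that port; roughly something like this in the container's start command (the entry file is a placeholder):
node --inspect=0.0.0.0:<port-exposed-on-container> dist/main.js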

Django debugger does not start by VS Code launch.json

Until recently my Django projects would debug fine using the launch.json file, but it stopped working today and I have no idea why.
Changes I made to my system were:
Clean-Installing Windows
Installing Python in the default path, just for the current user and not all users
Problem Description:
As soon as I press the F5 key, the debugger starts for a few milliseconds and stops automatically without any prompt. Since it does not show any error, I tried logging errors to a file, but there was nothing in it. It does not even load the terminal to run commands.
The added configuration in launch.json file is as below:
{
    "configurations": [
        {
            "name": "Python: Django",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}\\manage.py",
            "args": [
                "runserver"
            ],
            "django": true,
            "justMyCode": true,
            "logToFile": true,
            "console": "integratedTerminal",
            "cwd": "${workspaceFolder}",
            "host": "localhost"
        }
    ]
}
Can someone help me troubleshoot this? I have already tried:
Creating new projects
Reinstalling Python
Creating a new environment
Resetting VS Code Sync Settings
Running other debug configurations (they are working fine)
Changing the Django debug configuration
My current options are:
Research more (already spent hours, would take more)
Wait for a solution from someone
Clean-install Windows and all software (would be like BhramaAstra)
Posting an answer within a few minutes of posting the question seems weird, but I found a solution that could help lead to a proper answer to my problem.
The problem was the Python environment path: I had created the environment in the Documents folder on the C drive of the current non-admin user, expecting no issues since Python is installed just for the current user in the default path. But as soon as I created a new environment in the current user's directory, the debugger started working normally.
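A rough sketch of the difference (the paths below are examples, not the exact ones from my machine):
:: environment created under the Documents folder -> debugger exits silently
python -m venv "C:\Users\<user>\Documents\envs\django-env"
:: environment created directly under the user profile -> debugger works normally
python -m venv "C:\Users\<user>\envs\django-env"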
The issue is related to permissions and file paths; I hope this helps get answers to the new questions it raises:
Why can't I use Documents folder for creating a python environment?
Is there some other solution to my actual problem?
Adding images for further details:
(screenshot) Debugger not working from this environment
(screenshot) Debugger working as usual if the environment is created here

Unable to Debug Locally when Using AWS SAM CLI, CDK, and Lambda Layers

I am unable to find examples or good documentation on how to debug functions locally when building Lambda functions using the SAM CLI, AWS CDK, and Lambda Layers.
When building a RestApi and a simple Lambda function with the CDK and then attempting to debug locally in VSCode using a launch configuration like:
{
    "type": "aws-sam",
    "request": "direct-invoke",
    "name": "hello:app.handler (nodejs12.x)",
    "invokeTarget": {
        "target": "code",
        "projectRoot": "${workspaceFolder}/infrastructure/handlers/hello",
        "lambdaHandler": "app.handler"
    },
    "lambda": {
        "runtime": "nodejs12.x",
        "payload": {},
        "environmentVariables": {}
    }
}
the import statements referencing modules from any layer throw an error such as:
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module '/opt/nodejs/utils'"
Steps to reproduce:
Clone this repo: https://github.com/swizzmagik/sam-lambda-layer-test
Run npm install
Run npm run build
Run npm run api
Observe that the hello function properly resolves the layer references and works fine when called using curl or postman
Open handlers/hello/app.ts and try to debug in VSCode by opening the hello.ts file and hitting F5
Notice that the debugger starts but is unable to import the module and fails on this line: import { buildResponseHeaders, handleError } from "/opt/nodejs/utils";
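One direction that might be worth trying (an untested sketch; the template path and logical ID below are placeholders, since the CDK synthesizes its own template under cdk.out) is pointing the launch configuration at a synthesized template instead of raw code, so that sam local knows about the layer:
{
    "type": "aws-sam",
    "request": "direct-invoke",
    "name": "hello via template (sketch)",
    "invokeTarget": {
        "target": "template",
        "templatePath": "${workspaceFolder}/<path-to-synthesized-template>.template.json",
        "logicalId": "<function-logical-id>"
    }
}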

AWS SAM VSCode ptvsd debugging not using breakpoints

I'm trying to follow the instructions I found here for debugging a Python SAM application in VS Code
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging-python.html
I'm not sure why they don't use sam build in the example and point to .aws-sam/build, but that's what I'm attempting.
My launch.json looks like this:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "SAM CLI Python debug test",
            "type": "python",
            "request": "attach",
            "port": 5890,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/.aws-sam/build",
                    "remoteRoot": "/var/task"
                }
            ]
        }
    ]
}
I'm triggering the Lambdas directly for now, so I'm invoking them like this:
sam local invoke -d 5890
I'm then putting a breakpoint at the beginning of the Lambda I find in the build folder, but when I start the debugger in VS Code it executes the Lambda without stopping at the breakpoint.
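For context, this is roughly the sequence I'm following (the function identifier is a placeholder for whatever the template defines):
# start the container; with -d, sam waits for a debugger to attach before running the handler
sam local invoke <FunctionLogicalId> -d 5890
# then launch the "SAM CLI Python debug test" attach configuration in VS Code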
I created a GitHub repo with the test project I'm using and description of how I'm using it.
https://github.com/rupe120/aws-sam-python-debugging-test
Could someone help point me at what I'm missing in my setup?
So, the recommended way to do this is with the AWS Toolkit extension.
https://github.com/awslabs/aws-sam-cli/issues/1926#issuecomment-616600799
So the docs suggest using a localRoot of "${workspaceFolder}/hello_world/build" (assuming one is using the Hello World template). However, it only works when removing the build at the end of the path:
"localRoot": "${workspaceFolder}/hello_world"
This way, I got it to work without the AWS Toolkit.
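For reference, the pathMappings block that works then looks like this (everything else unchanged from the configuration in the question):
"pathMappings": [
    {
        "localRoot": "${workspaceFolder}/hello_world",
        "remoteRoot": "/var/task"
    }
]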

Running Spark on AWS EMR, how to run driver on master node?

It seems that by default EMR deploys the Spark driver to one of the CORE nodes, leaving the MASTER node virtually unutilized. Is it possible to run the driver program on the MASTER node instead? I have experimented with the --deploy-mode argument to no avail.
Here is my instance groups JSON definition:
[
    {
        "InstanceGroupType": "MASTER",
        "InstanceCount": 1,
        "InstanceType": "m3.xlarge",
        "Name": "Spark Master"
    },
    {
        "InstanceGroupType": "CORE",
        "InstanceCount": 3,
        "InstanceType": "m3.xlarge",
        "Name": "Spark Executors"
    }
]
Here is my configurations JSON definition:
[
    {
        "Classification": "spark",
        "Properties": {
            "maximizeResourceAllocation": "true"
        },
        "Configurations": []
    },
    {
        "Classification": "spark-env",
        "Properties": {},
        "Configurations": [
            {
                "Classification": "export",
                "Properties": {},
                "Configurations": []
            }
        ]
    }
]
Here is my steps JSON definition:
[
    {
        "Name": "example",
        "Type": "SPARK",
        "Args": [
            "--class", "com.name.of.Class",
            "/home/hadoop/myjar-assembly-1.0.jar"
        ],
        "ActionOnFailure": "TERMINATE_CLUSTER"
    }
]
I am using aws emr create-cluster with --release-label emr-4.3.0.
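For context, the full command looks roughly like this (the file names are mine, and other required options such as --applications are omitted):
aws emr create-cluster \
  --release-label emr-4.3.0 \
  --instance-groups file://instance-groups.json \
  --configurations file://configurations.json \
  --steps file://steps.json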
Setting the location of the driver
With spark-submit, the flag --deploy-mode can be used to select the location of the driver.
Submitting applications in client mode is advantageous when you are debugging and wish to quickly see the output of your application. For applications in production, the best practice is to run the application in cluster mode. This mode offers you a guarantee that the driver is always available during application execution. However, if you do use client mode and you submit applications from outside your EMR cluster (such as locally, on a laptop), keep in mind that the driver is running outside your EMR cluster and there will be higher latency for driver-executor communication.
https://blogs.aws.amazon.com/bigdata/post/Tx578UTQUV7LRP/Submitting-User-Applications-with-spark-submit
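For example, submitting from the master node in client mode keeps the driver on the master; a sketch reusing the class and jar from the step definition above:
# run from the master node: in client mode the driver runs in the submitting process
spark-submit \
  --deploy-mode client \
  --class com.name.of.Class \
  /home/hadoop/myjar-assembly-1.0.jar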
I don't think it is a waste. When running Spark on EMR, the master node will run the YARN ResourceManager, the Livy server, and whatever other applications you selected. And if you run in client mode, most of the driver program will run on the master node as well.
Note that the driver program could be heavier than the tasks on executors, e.g. collecting all results from all executors, in which case you need to allocate enough resources to your master node if it is where the driver program is running.