I'm trying to use the Native Debug extension (https://marketplace.visualstudio.com/items?itemName=webfreak.debug) in VS Code to launch gdb on a remote server through SSH.
It works when I connect directly to the remote server:
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "gdb",
            "request": "launch",
            "name": "Launch Program (SSH)",
            "target": "./appli",
            "cwd": "${workspaceFolderBasename}",
            "arguments": "",
            "ssh": {
                "host": "xxx.xx.xxx.xxx",
                "cwd": "/home/username/project",
                "keyfile": "/home/username/.ssh/id_rsa",
                "user": "username"
            },
            "valuesFormatting": "parseText"
        }
    ]
}
Is there any way to launch gdb on a remote server through a proxy? For example, by letting the ssh2 package (https://www.npmjs.com/package/ssh2) that the extension uses read my .ssh/config file. I want to launch gdb on the remote server yyy.yy.yyy.yyy through the proxy xxx.xx.xxx.xxx.
It's working! I changed the code of the webfreak.debug extension to handle the SSH ProxyCommand option, using ssh2-promise (https://www.npmjs.com/package/ssh2-promise) instead of ssh2.
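For context, this is the kind of ~/.ssh/config entry the modified extension can now honor; the Host alias, addresses, and username are placeholders, so treat it as a minimal sketch rather than the exact file:

# Reach the target yyy.yy.yyy.yyy by jumping through the proxy xxx.xx.xxx.xxx
Host target
    HostName yyy.yy.yyy.yyy
    User username
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -W %h:%p username@xxx.xx.xxx.xxx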
I'm aware that when I start my server with python manage.py runserver [my custom IP address] I can choose which IP and port are used. But when I use VS Code in debug mode, I don't know where to define the server address other than the default 127.0.0.1:8000, and with 127.0.0.1:8000 I can only access the page from my PC (Win 10).
I need to change the address, because otherwise I can't open the page on my Android phone (for testing with the debugger attached), but I don't know where or how.
I suspect it can be defined somewhere in launch.json, but I haven't found any information about that yet.
You can do that by passing the IP as args in launch.json.
// launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current file",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "args": ["runserver", "192.168.1.0:8000"], // or whatever IP you want to use
            "console": "integratedTerminal"
        }
    ]
}
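One note on the address itself: for the phone to reach the page, the server must bind to an address the phone can route to, so "0.0.0.0:8000" (all interfaces) or the PC's LAN IP are the usual choices rather than 127.0.0.1.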
I need to get the SYS_PTRACE kernel capability on my Docker container. Here's the Dockerrun.aws.json:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "some-bucket",
        "Key": "somekey"
    },
    "Image": {
        "Name": "somename",
        "Update": "true"
    },
    "Ports": [
        {
            "HostPort": 80,
            "ContainerPort": 80
        },
        ... a few more ports ...
    ]
}
Remember, this is Amazon Linux 2, which is a whole new distribution and EB platform. We're not using Docker Compose (where you could add this to the yml).
I tried just adding the following section:
"linuxParameters": {
"capabilities": {
"add": ["SYS_PTRACE"]
}
}
It was simply ignored.
Thanks!
It seems to me this setting is not supported in v1. Looking at the docs under "Docker platform configuration - without Docker Compose" [1], linuxParameters is not listed among the "Valid keys and values for the Dockerrun.aws.json v1 file". You might have to switch to v2 by using multi-container Docker. The docs for v2 state that "the container definition and volumes sections of Dockerrun.aws.json use the same formatting as the corresponding sections of an Amazon ECS task definition file" [2].
It looks like your snippet above would work in v2, because it is a valid task definition section; see [3].
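For illustration, a v2 file carrying your snippet might look roughly like this; an untested sketch, with the container name, image, and memory value as placeholders:

{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "app",
            "image": "somename",
            "essential": true,
            "memory": 256,
            "linuxParameters": {
                "capabilities": {
                    "add": ["SYS_PTRACE"]
                }
            }
        }
    ]
}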
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
[2] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
[3] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
I have a Flask application running on port 8000, because Logstash is already running on port 5000.
app.run(debug=True, host='0.0.0.0', port=8000)
I can run my app successfully, but when I use the VS Code debugger it throws
OSError: [Errno 98] Address already in use
because the debugger tries to run my app on port 5000.
I tried editing .vscode/launch.json and setting "port": 8000 inside configurations, but the error stays the same. How can I tell VS Code to run my app with the debugger on another port?
Add an args key to your debug configuration and set the port there:
https://code.visualstudio.com/docs/python/debugging#_set-configuration-options
{
    "name": "Python: startup.py",
    "type": "python",
    "request": "launch",
    "program": "${workspaceFolder}/startup.py",
    "args": ["run", "--port", "8000"]
}
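If the app is started through the flask CLI rather than as a plain script, the configuration documented at the link above uses "module" instead of "program"; a sketch of that variant, assuming the app object lives in startup.py:

{
    "name": "Python: Flask",
    "type": "python",
    "request": "launch",
    "module": "flask",
    "env": {
        "FLASK_APP": "startup.py"
    },
    "args": ["run", "--port", "8000"]
}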
I'm getting asymmetrical container discoverability with multicontainer Docker on AWS. Namely, the first container can find the second, but the second cannot find the first.
I have a multicontainer docker deployment on AWS Elastic Beanstalk. Both containers are running Node servers using identical initial code, and are built with identical Dockerfiles. Everything is up to date.
Anonymized version of my Dockerrun.aws.json file:
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "firstContainer",
            "image": "firstContainerImage",
            "essential": true,
            "memoryReservation": 196,
            "links": [
                "secondContainer",
                "redis"
            ],
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 8080
                }
            ]
        },
        {
            "name": "secondContainer",
            "image": "secondContainerImage",
            "essential": true,
            "memoryReservation": 196,
            "links": [
                "redis"
            ]
        },
        {
            "name": "redis",
            "image": "redis:4.0-alpine",
            "essential": true,
            "memoryReservation": 128
        }
    ]
}
The firstContainer proxies a subset of requests to secondContainer on port 8080, via the address http://secondContainer:8080, which works completely fine. However, if I try to send a request the other way, from secondContainer to http://firstContainer:8080, I get a "Bad Address" error of one sort or another. This is true both from within the servers running on these containers, and directly from the containers themselves using wget. It's also true when trying different exposed ports.
If I add "firstContainer" to the "links" field of the second container's Dockerrun file, I get an error.
My local setup, using docker-compose, does not have this problem at all.
Anyone know what the cause of this is? How can I get symmetrical discoverability on an AWS multicontainer deployment?
I got a response from AWS support on the topic.
The links are indeed one-way, which is an unfortunate limitation. They recommended taking one of two approaches:
1. Use a shared filesystem and write the IP addresses of the containers to a file, which your application can then read to reach the other containers.
2. Use the AWS Fargate service with ECS Service Discovery, which lets you automatically create DNS records for the tasks and make them discoverable within your VPC.
I opted for a third approach: have the container that can discover the rest send out pings to inform the others of its docker-network IP address, as sketched below.
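A minimal sketch of that announcement idea in Python, assuming a hypothetical /announce endpoint on the peer (the real servers here are Node, so this only illustrates the shape of the exchange):

import json
import socket
import urllib.request

# firstContainer can resolve secondContainer by name thanks to the link,
# but not vice versa, so it announces its own docker-network IP to the peer.
own_ip = socket.gethostbyname(socket.gethostname())

req = urllib.request.Request(
    "http://secondContainer:8080/announce",  # hypothetical endpoint on the peer
    data=json.dumps({"name": "firstContainer", "ip": own_ip}).encode(),
    headers={"Content-Type": "application/json"},
)
# The peer stores the IP and uses it in place of the unresolvable name.
urllib.request.urlopen(req)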
I'm exploring another option which is a mix of links, port mappings and extraHosts.
{
    "name": "grafana",
    "image": "docker.pkg.github.com/safecast/reporting2/grafana:latest",
    "memoryReservation": 128,
    "essential": true,
    "portMappings": [
        {
            "hostPort": 3000,
            "containerPort": 3000
        }
    ],
    "links": [
        "renderer"
    ],
    "mountPoints": [
        {
            "sourceVolume": "grafana",
            "containerPath": "/etc/grafana",
            "readOnly": true
        }
    ]
},
{
    "name": "renderer",
    "image": "grafana/grafana-image-renderer:2.0.0",
    "memoryReservation": 128,
    "essential": true,
    "portMappings": [
        {
            "hostPort": 8081,
            "containerPort": 8081
        }
    ],
    "mountPoints": [],
    "extraHosts": [
        {
            "hostname": "grafana",
            "ipAddress": "172.17.0.1"
        }
    ]
}
This lets grafana resolve renderer via links as usual, while the renderer container resolves grafana to the host IP (172.17.0.1, the default Docker bridge gateway), which has port 3000 bound back to the grafana container.
So far it seems to work. The portMappings on renderer might not be required, but I'm still working out all the kinks.
I have two microservice images, a Go REST API and a React frontend, both in my AWS ECR. I'll be using Elastic Beanstalk. Since I believe they run on the same machine, I configured the React app to fetch data from the API at localhost:8080. Below are the Dockerfiles for both. They worked in my dev environment, so I pushed them to my ECR.
Dockerfile for Golang Rest API
FROM golang
ADD . /go/src/vsts/project/gorestapi
WORKDIR /go/src/vsts/project/gorestapi
RUN go get github.com/golang/dep/cmd/dep
RUN dep ensure
RUN go install .
ENTRYPOINT /go/bin/gorestapi
EXPOSE 8080
Dockerfile for React Frontend App
FROM node:8.4.0-alpine
WORKDIR /usr/src/app
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm i -g serve
CMD serve -s build/app -p 3000
EXPOSE 3000
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
I don't know whether volumes or mount points still need to be declared; all I know is that I need to set the Dockerrun JSON version to 2 and provide the name, image, and port mappings in each container definition. Most of the samples are confusing, since none of them shows an app from a private repo, and they all have those volumes, mount points, and links that I don't really understand how to use. I tried the file below, but it did not work.
Edit: I changed Dockerrun.aws.json, expecting the volume host sourcePath to be a path on my machine. Please correct me if I'm wrong.
{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "webapp",
            "host": {
                "sourcePath": "/webapp"
            }
        },
        {
            "name": "gorestapi",
            "host": {
                "sourcePath": "/gorestapi"
            }
        }
    ],
    "containerDefinitions": [
        {
            "name": "gorestapi",
            "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/gorestapi:latest",
            "essential": true,
            "memory": 512,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "containerPort": 8080
                }
            ],
            "links": [
                "webapp"
            ],
            "mountPoints": [
                {
                    "sourceVolume": "gorestapi",
                    "containerPath": "/go/src/vsts/project/gorestapi"
                }
            ]
        },
        {
            "name": "webapp",
            "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/webapp:latest",
            "essential": true,
            "memory": 256,
            "portMappings": [
                {
                    "hostPort": 3000,
                    "containerPort": 3000
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "webapp",
                    "containerPath": "/usr/src/app"
                }
            ]
        }
    ]
}
Did I specify the correct values for the paths?
Yeah, you still need to define the volumes and their mount points if you want to use them; if you don't mount anything, you can leave both sections out. For setting up a multi-container environment, a more comprehensive guide in my opinion is straight out of the official docs, and it has a private-repository section too. Hoping that helps >> https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
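Since your Dockerfiles already bake the code into the images, a minimal v2 file without volumes, mountPoints, or links might be enough; an untested sketch using your image names:

{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "gorestapi",
            "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/gorestapi:latest",
            "essential": true,
            "memory": 512,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "containerPort": 8080
                }
            ]
        },
        {
            "name": "webapp",
            "image": "<acctId>.dkr.ecr.us-east-1.amazonaws.com/dev/webapp:latest",
            "essential": true,
            "memory": 256,
            "portMappings": [
                {
                    "hostPort": 3000,
                    "containerPort": 3000
                }
            ]
        }
    ]
}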
Also, consider looking into EC2 Container Service (ECS). Elastic Beanstalk does use it in the background to run multi-container environments, but it could be easier to run them directly on ECS.