How to execute a command in a pod (Kubernetes) using the API? - google-cloud-platform

I'm trying to execute a command in a container (in a Kubernetes pod on GKE with Kubernetes 1.1.2).
Reading the documentation, I understood that I can use a GET or POST query to open a websocket connection on the API endpoint to execute a command. When I use GET, it does not work at all and returns an error. When I use POST, something like this seems as if it could work (but it does not):
curl 'https://admin:xxx@IP/api/v1/namespaces/default/pods/hello-whue1/exec?stdout=1&stderr=1&command=ls' -H "Connection: upgrade" -k -X POST -H 'Upgrade: websocket'
The response to that is:
unable to upgrade: missing upgrade headers in request: http.Header{"User-Agent":[]string{"curl/7.44.0"}, "Content-Length":[]string{"0"}, "Accept":[]string{"*/*"}, "Authorization":[]string{"Basic xxx=="}, "Connection":[]string{"upgrade"}, "Upgrade":[]string{"websocket"}}
It looks like that should be enough to upgrade the POST request and start using the websocket streams, right? What am I missing?
I was also told that opening a websocket with POST probably violates the websocket protocol (only GET should work?).

You'll probably have the best time using the Kubernetes client library, which is the same code kubectl uses, but if for some reason that isn't an option, then my best suggestion is to look through the client library's code for executing remote commands and see what headers it sets.

Use a websocket client; it works.
In my local Kubernetes cluster, the connection metadata looks like this:
ApiServer = "172.21.1.11:8080"
Namespace = "default"
PodName = "my-nginx-3855515330-l1uqk"
ContainerName = "my-nginx"
Commands = "/bin/bash"
The connection URL:
"ws://172.21.1.11:8080/api/v1/namespaces/default/pods/my-nginx-3855515330-l1uqk/exec?container=my-nginx&stdin=1&stdout=1&stderr=1&tty=1&command=%2Fbin%2Fbash"
On macOS, you can use wscat, a websocket client CLI tool, to test the connection:
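For example (a sketch; this assumes the same cluster address and pod as above):
wscat -c "ws://172.21.1.11:8080/api/v1/namespaces/default/pods/my-nginx-3855515330-l1uqk/exec?container=my-nginx&stdin=1&stdout=1&stderr=1&tty=1&command=%2Fbin%2Fbash"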
You can access the websocket example: "https://github.com/lth2015/container-terminal"
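If you prefer to script the same check, here is a minimal sketch using the Python websocket-client package; it assumes the same unauthenticated local API server as above (an authenticated cluster would additionally need an Authorization header or a token query parameter):
import websocket  # pip install websocket-client

url = ("ws://172.21.1.11:8080/api/v1/namespaces/default/pods/"
       "my-nginx-3855515330-l1uqk/exec"
       "?container=my-nginx&stdout=1&stderr=1&command=ls")

# Kubernetes multiplexes the exec streams over one websocket using the
# "channel.k8s.io" subprotocol: the first byte of every binary frame is the
# channel number (0 = stdin, 1 = stdout, 2 = stderr).
ws = websocket.create_connection(url, subprotocols=["channel.k8s.io"])
try:
    while True:
        frame = ws.recv()
        if not frame:
            break
        channel, payload = frame[0], frame[1:]
        print(channel, payload.decode(errors="replace"))
except websocket.WebSocketConnectionClosedException:
    pass
finally:
    ws.close()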

You can use a websocket client to exec into a pod; here is a quick demo.
The JavaScript code below shows how to connect to Kubernetes:
<script type="text/javascript">
angular.module('exampleApp', ['kubernetesUI'])
    .config(function(kubernetesContainerSocketProvider) {
        kubernetesContainerSocketProvider.WebSocketFactory = "CustomWebSockets";
    })
    .run(function($rootScope) {
        $rootScope.baseUrl = "ws://localhost:8080";
        $rootScope.selfLink = "/api/v1/namespaces/default/pods/my-nginx-3855515330-l1uqk";
        $rootScope.containerName = "my-nginx";
        $rootScope.accessToken = "";
        $rootScope.preventSocket = true;
    })
    /* Our custom WebSocket factory adapts the url */
    .factory("CustomWebSockets", function($rootScope) {
        return function CustomWebSocket(url, protocols) {
            url = $rootScope.baseUrl + url;
            if ($rootScope.accessToken)
                url += "&access_token=" + $rootScope.accessToken;
            return new WebSocket(url, protocols);
        };
    });
</script>
You can test it in other languages as well.

Related

AWS HTTP API Gateway 503 Service Unavailable

I have an HTTP API Gateway with an HTTP integration backend server on EC2. The API receives lots of queries during the day, and looking at the logs I realized that the API sometimes returns a 503 HTTP code with the body:
{ "message": "Service Unavailable" }
When I found this out, I tried the API and ran the HTTP requests many times in Postman; out of twenty tries I get at least one 503.
I then thought that the HTTP integration server was busy, but the server is not loaded, and when I go directly to the HTTP integration server I get 200 responses every time.
The timeout parameter is set to 30000 ms and the endpoint's average response time is 200 ms, so timeout is not the problem. Also, the HTTP 503 does not arrive after 30 seconds of the request; it is returned instantly.
Can anyone help me?
Thanks
I solved this issue by editing the keep-alive connection parameters of my internal integration server. AWS API Gateway expects the keep-alive parameters to follow a standard configuration, so I tweaked my NGINX server parameters until the issue went away.
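For illustration only, the kind of NGINX directives involved looks like the sketch below; the exact values are assumptions and have to be tuned for your own setup:
http {
    # keep idle connections open long enough for API Gateway to reuse them
    keepalive_timeout  75s;
    # allow many requests per connection before NGINX recycles it
    keepalive_requests 1000;
}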
Had the same issue with a self-made microservice in Node that was integrated into AWS API Gateway. After some reconfiguration of the CloudWatch logs, I got a further indicator of what was wrong: INTEGRATION_NETWORK_FAILURE
Verify that your problem is the same, i.e. through more elaborate log output:
In API Gateway - Logging, add more output in "Log format".
Use this or similar content for "Log format":
{"httpMethod":"$context.httpMethod","integrationErrorMessage":"$context.integrationErrorMessage","protocol":"$context.protocol","requestId":"$context.requestId","requestTime":"$context.requestTime","resourcePath":"$context.resourcePath","responseLength":"$context.responseLength","routeKey":"$context.routeKey","sourceIp":"$context.identity.sourceIp","status":"$context.status","errMsg":"$context.error.message","errType":"$context.error.responseType","intError":"$context.integration.error","intIntStatus":"$context.integration.integrationStatus","intLat":"$context.integration.latency","intReqID":"$context.integration.requestId","intStatus":"$context.integration.status"}
After using the API Gateway endpoint again and seeing the failure, consult the logs once more; the integrationErrorMessage field should now contain the indicator above.
Solution in a NodeJS microservice (using Express)
Add timeouts for headers and keep-alive on the Express server's socket configuration when listening:
const app = require('express')();

// If not already set, and if you need to advertise the keep-alive through the
// HTTP response, you might want to use this middleware:
/*
app.use((req, res, next) => {
    res.setHeader('Connection', 'keep-alive');
    res.setHeader('Keep-Alive', 'timeout=30');
    next();
});
*/

/* ...your main logic... */

const server = app.listen(8080, 'localhost', () => {
    console.warn(`⚡️[server]: Server is running at http://localhost:8080`);
});
server.keepAliveTimeout = 30 * 1000; // <- important line
server.headersTimeout = 35 * 1000;   // <- important line
Reason
Some AWS components seem to demand that a connection be kept alive, even if the server responds otherwise (Connection: close). When API Gateway (and possibly AWS ELBs) tries to reuse such a connection, the recycling fails because the other side has most likely already closed it, hence the assumed "NETWORK FAILURE".
This error is intermittent: at least the API Gateway seems to close unused connections after a while, providing a clean execution the next time. I can only assume they do this for high performance rather than anything less.

AWS DocumentDB connection problem with TLS

When TLS is disabled, I can connect successfully through my Lambda function using the same code as shown here: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect.html#w139aac29c11c13b5b7
However, when I enable TLS and use the TLS-enabled code sample from the above link, my Lambda function times out. I downloaded the RDS combined CA PEM file with wget, and I deploy the PEM file along with my code to AWS Lambda.
This is the code where my execution stops and times out:
caFilePath := "rds-combined-ca-bundle.pem"

var connectionStringTemplate = "mongodb://%s:%s@%s:27017/dbname?ssl=true&sslcertificateauthorityfile=%s"
var connectionURI = fmt.Sprintf(connectionStringTemplate, secret["username"], secret["password"], secret["host"], caFilePath)

fmt.Println("Connection String", connectionURI)

client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}
I don't see any errors in the CloudWatch logs after the "Connection String" print.
I suspect it's an issue with your VPC design. See "Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC" and check the last paragraph:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
The link below also gives detailed instructions:
https://blog.webiny.com/connecting-to-aws-documentdb-from-a-lambda-function-2b666c9e4402
Can you try creating a Lambda test function using Python and see if you are having the same issue?
import pymongo
import sys
##Create a MongoDB client, open a connection to Amazon DocumentDB as a replica set and specify the read preference as secondary preferred
client = pymongo.MongoClient('mongodb://<dbusername>:<dbpassword>@mycluster.node.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred')
##Specify the database to be used
db = client.test
##Specify the collection to be used
col = db.myTestCollection
##Insert a single document
col.insert_one({'hello':'Amazon DocumentDB'})
##Find the document that was previously written
x = col.find_one({'hello':'Amazon DocumentDB'})
##Print the result to the screen
print(x)
##Close the connection
client.close()
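If the Python test also hangs, you can make it fail fast instead of hitting the Lambda timeout by setting a short server selection timeout and forcing a round trip. A minimal sketch (host, credentials, and CA path are placeholders, as above):
import pymongo

# Fail after 5 seconds instead of hanging until the Lambda timeout.
client = pymongo.MongoClient(
    'mongodb://<dbusername>:<dbpassword>@mycluster.node.us-east-1.docdb.amazonaws.com:27017/'
    '?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0',
    serverSelectionTimeoutMS=5000,
)
try:
    client.admin.command('ping')  # forces a real round trip to the cluster
    print('TLS connection OK')
except pymongo.errors.ServerSelectionTimeoutError as err:
    print('Connection failed:', err)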

Is it possible to connect to the Google IOTCore MQTT Bridge via Javascript?

I've been trying to use the JavaScript version of the Eclipse Paho MQTT client to access the Google IoT Core MQTT Bridge, as suggested here:
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
However, whatever I do, any attempt to connect with known good credentials (working with other clients) results in this connection error:
errorCode: 7, errorMessage: "AMQJS0007E Socket error:undefined."
Not much to go on there, so I'm wondering if anyone has ever successfully connected to the MQTT Bridge via JavaScript with Eclipse Paho, the client implementation suggested by Google in their documentation.
I've gone through their troubleshooting steps, and things seem to be on the up and up, so no help there either.
https://cloud.google.com/iot/docs/troubleshooting
I have noticed that their docs have sample code for Java, Python, etc., but not JavaScript, so I'm wondering if it's simply not supported and their documentation just fails to mention that.
I've simplified my code to just use the 'Hello World' example in the Paho documentation, and as far as I can tell I've done things correctly (including using my device path as the ClientID, the JWT token as the password, specifying an 'unused' userName field and explicitly requiring MQTT v3.1.1).
In the meantime I'm falling back to polling via their HTTP bridge, but that has obvious latency and network traffic shortcomings.
// Create a client instance
client = new Paho.MQTT.Client("mqtt.googleapis.com", Number(8883), "projects/[my-project-id]/locations/us-central1/registries/[my registry name]/devices/[my device id]");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({
    mqttVersion: 4, // maps to MQTT V3.1.1, required by IOTCore
    onSuccess: onConnect,
    onFailure: onFailure,
    userName: 'unused', // suggested by Google for this field
    password: '[My Confirmed Working JWT Token]' // working JWT token
});

function onFailure(resp) {
    console.log(resp);
}
// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
I'm a Googler (but I don't work in Cloud IoT).
Your code looks good to me and it should work. I will try it for myself this evening or tomorrow and report back to you.
I've spent the past day working on a Golang version of the samples published in Google's documentation. Like you, I was disappointed not to see all of Google's usual languages covered by samples.
Are you running the code from a browser or is it running on Node.JS?
Do you have a package.json (if Node) that you would share too please?
Update
Here's a Node.JS sample (JavaScript, but non-browser) that connects to Cloud IoT, subscribes to /devices/${DEVICE}/config and publishes to /devices/${DEVICE}/events:
https://gist.github.com/DazWilkin/65ad8890d5f58eae9612632d594af2de
Place all the files in the same directory.
Replace the values in index.js for the location of Google's CA and your key.
Replace the [[YOUR-X]] values in config.json.
Run "npm install" to pull the packages.
Run node index.js.
You should be able to pull messages from the Pub/Sub subscription, and you should be able to send config messages to the device.
The short answer is no: Google Cloud IoT Core doesn't support WebSockets.
All the JavaScript MQTT libraries use WebSocket, because in-browser JavaScript is restricted to performing HTTP requests and WebSocket connections only.
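For comparison, a non-browser client can reach the bridge over native MQTT with TLS on port 8883. A minimal sketch using the Python paho-mqtt package, where the project/registry/device path and the JWT are placeholders:
import ssl
import time
import paho.mqtt.client as mqtt

# IoT Core requires the full device path as the MQTT client id.
client_id = ('projects/my-project-id/locations/us-central1/'
             'registries/my-registry/devices/my-device-id')

client = mqtt.Client(client_id=client_id)
# The username is ignored; the password must be a JWT signed with the
# device's private key.
client.username_pw_set(username='unused', password='<my JWT token>')
client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)

client.connect('mqtt.googleapis.com', 8883)
client.loop_start()
time.sleep(2)  # crude: give the connection time to complete
client.publish('/devices/my-device-id/events', 'Hello', qos=1)
client.loop_stop()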

C++, Mongoose: How to make a POST request?

I'm working on a project that uses Mongoose, and I need to make a POST request to another server. I don't see an example of how to do this in their examples list. Does anyone know how to do this?
EDIT to add more detail:
I'm working within a larger C++ app and need to create a simple server so that a user can query the app for information. Right now, I start the server like this:
Status sampleCmd::startServer()
{
    Status stat = MS::kSuccess;
    struct mg_server *server;

    // Create and configure the server
    server = mg_create_server(NULL, ev_handler);
    mg_set_option(server, "listening_port", "8080");
    stopServer = false;

    printf("Starting on port %s\n", mg_get_option(server, "listening_port"));
    while (!stopServer) // for (;;)
    {
        mg_poll_server(server, 1000);
    }

    // Cleanup, and free server instance
    mg_destroy_server(&server);
    return stat;
}
In my event handler, I parse the provided URI for a particular path and then run some commands with the application's API. I need to send these results back to a server for the user to see. It's this latter step that is unclear to me. It seems odd that a web server library wouldn't have some client functionality; don't servers need to talk to other servers?
Okay, it turns out I was thinking about this wrong: I needed to respond to the POST request I was getting, so using mg_printf_data(...) with the connection object worked for me.

boto.sqs connect to non-aws endpoint

I currently need to connect to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. In Java and Node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST", aws_secret_access_key="TEST", port=9324, is_secure=False)
and then:
queue = conn.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance?
Here's how to create an SQS connection that connects to fake_sqs:
region = boto.sqs.regioninfo.SQSRegionInfo(
    connection=None,
    name='fake_sqs',
    endpoint='localhost',  # or wherever fake_sqs is running
    connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
    aws_access_key_id='fake_key',
    aws_secret_access_key='fake_secret',
    is_secure=False,
    port=4568,  # or wherever fake_sqs is running
    region=region,
)
region.connection = conn
# you can now work with conn
# conn.create_queue('test_queue')
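For instance, expanding the commented usage above (a sketch; the queue name and message body are arbitrary):
queue = conn.create_queue('test_queue')
queue.write(queue.new_message('hello'))
print(queue.get_messages())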
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs