Getting Unauthorized message while using @kubernetes/client-node - amazon-web-services

const { v4: uuidv4 } = require("uuid");
const k8s = require("@kubernetes/client-node");

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);

// Pod names and label values must be strings
let USER_ID = String(Date.now()), LEVEL = 1, DIFFICULTY = "start", DRONE = "dr1", RESET = false;

const definition = {
  apiVersion: "v1",
  kind: "Pod",
  metadata: {
    annotations: {
      key1: "value1",
      key2: "value2",
    },
    name: USER_ID,
    labels: { app: "simulator-app", name: USER_ID },
  },
  spec: {
    containers: [
      {
        name: "simulator-app",
        image: ".....dkr.ecr.eu-central-1.amazonaws.com/simulator:latest",
        ports: [{ containerPort: 8080 }, { containerPort: 9090 }],
        tty: true,
        stdin: true,
        imagePullPolicy: "IfNotPresent",
        command: ["/bin/bash", "-c"],
        args: [
          // Template literal replaces the original Python-style "%s" formatting,
          // which is not valid JavaScript
          `echo START....;LEVEL=${LEVEL};DIFFICULTY=${DIFFICULTY};DRONE=${DRONE};echo "LEVEL=${LEVEL}";echo "DIFFICULTY=${DIFFICULTY}";echo "DRONE=${DRONE}";cd /home/;source ~/.bashrc;echo "Bashrc Sourced";pwd;ls;echo "Starting simulator with ${LEVEL} level, ${DRONE} vehicle and ${DIFFICULTY} difficulty";source "/opt/ros/foxy/setup.bash";source "/home/rossimulator/install/setup.bash";ros2 launch simulator_bringup sim_nodes.launch.py level:=${LEVEL} drone_type:=${DRONE} difficulty:=${DIFFICULTY};echo Done;`,
        ],
      },
    ],
  },
};

k8sApi.createNamespacedPod("Default", definition)
  .then(console.log)
  .catch(console.log);
I get the following error on the console:
{
response: {
statusCode: 401,
body: {
kind: "Status",
apiVersion: "v1",
metadata: {},
status: "Failure",
message: "Unauthorized",
reason: "Unauthorized",
code: 401,
},
headers: {
"audit-id": "07eba07d-6121-492f-9993-ece3fa8827c5",
"cache-control": "no-cache, private",
"content-type": "application/json",
date: "Mon, 28 Mar 2022 02:48:59 GMT",
"content-length": "129",
connection: "close",
},
request: {
uri: {
protocol: "https:",
slashes: true,
auth: null,
host: ".....gr7.eu-central-1.eks.amazonaws.com",
port: 443,
hostname: "......gr7.eu-central-1.eks.amazonaws.com",
hash: null,
search: null,
query: null,
pathname: "/api/v1/namespaces/Default/pods",
path: "/api/v1/namespaces/Default/pods",
href: "https://..................eu-central-1.eks.amazonaws.com/api/v1/namespaces/Default/pods",
},
method: "POST",
headers: {
Accept: "application/json",
Authorization: "Bearer k8s-aws-v1.....",
"content-type": "application/json",
"content-length": 506,
},
},
},
body: {
kind: "Status",
apiVersion: "v1",
metadata: {},
status: "Failure",
message: "Unauthorized",
reason: "Unauthorized",
code: 401,
},
statusCode: 401,
name: "HttpError",
};
What could be the possible reason for getting this error?
However, it works in Python this way:
from flask import Flask, jsonify
import time
from kubernetes import client, config
import uuid
from kubernetes.client.rest import ApiException
app = Flask(__name__)
config.load_kube_config(
context="arn:aws:eks:eu-central-1:9010......:cluster/sim-cluster"
)
v1 = client.CoreV1Api()
# definition - same as above
v1.create_namespaced_pod(body=definition, namespace="default")
I could not find an equivalent of load_kube_config in this library.
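For reference, a rough Node.js equivalent is to load the default kubeconfig and then select the EKS context by name with KubeConfig.setCurrentContext (a sketch, not from the original post; the context name is copied from the Python call above and the account ID remains elided):
const k8s = require("@kubernetes/client-node");
const kc = new k8s.KubeConfig();
// reads ~/.kube/config (or the file pointed to by KUBECONFIG), like load_kube_config()
kc.loadFromDefault();
// pick the EKS context by name, like the context= argument in Python
kc.setCurrentContext("arn:aws:eks:eu-central-1:9010......:cluster/sim-cluster");
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);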

Related

Adding organization and app in serverless.ts breaks the application

I am experiencing very strange behaviour with my serverless application.
Here is my serverless.ts file:
import type { AWS } from '@serverless/typescript';
import { hello } from '@functions';
const serverlessConfiguration: AWS = {
service: 'users',
frameworkVersion: '2',
// org: '<MY-ORG-NAME>',
// app: '<MY-APP-NAME>',
custom: {
esbuild: {
bundle: true,
minify: false,
sourcemap: true,
exclude: ['aws-sdk'],
target: 'node14',
define: { 'require.resolve': undefined },
platform: 'node',
},
'serverless-offline': {
httpPort: 4000,
},
ngrokTunnel: {
tunnels: [
{
port: 4000,
},
],
},
avatarUploadBucket: '<NAME>',
userReplicationTopic: '<NAME>',
replicatedUserRemovalTopic:
'<NAME>',
},
plugins: [
'serverless-esbuild',
'serverless-offline',
'serverless-ngrok-tunnel',
],
provider: {
name: 'aws',
runtime: 'nodejs14.x',
profile: '<MY-AWS-PROFILE>',
region: '<MY-REGION>',
stage: 'dev',
apiGateway: {
minimumCompressionSize: 1024,
shouldStartNameWithService: true,
},
iamRoleStatements: [
{
Effect: 'Allow',
Action: ['s3:*', 'sns:*'],
Resource: '*',
},
],
environment: {
AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
NODE_OPTIONS: '--enable-source-maps --stack-trace-limit=1000',
USER_REPLICATION_TOPIC_ARN: {
Ref: 'UserReplicationSNSTopic',
},
REPLICATED_USER_REMOVAL_TOPIC_ARN: {
Ref: 'ReplicatedUserRemovalSNSTopic',
},
},
lambdaHashingVersion: '20201221',
},
functions: {
hello
},
resources: {
Resources: {
AvatarUploadBucket: {
Type: 'AWS::S3::Bucket',
Properties: {
BucketName: '${self:custom.avatarUploadBucket}',
AccessControl: 'PublicRead',
},
},
UserReplicationSNSTopic: {
Type: 'AWS::SNS::Topic',
Properties: {
TopicName: '${self:custom.userReplicationTopic}',
},
},
ReplicatedUserRemovalSNSTopic: {
Type: 'AWS::SNS::Topic',
Properties: {
TopicName: '${self:custom.replicatedUserRemovalTopic}',
},
},
},
},
outputs: {
snsTopics: {
ReplicatedUserRemovalSNSTopicARN: '!Ref ReplicatedUserRemovalSNSTopic',
},
},
};
module.exports = serverlessConfiguration;
I currently have org and app commented out and it works, but if I uncomment them and send a request to the API endpoint I get the following error: Runtime.ImportModuleError: Error: Cannot find module 's_hello'
It is not about the contents of the file, since I get this error for every microservice I have and for every function in that microservice. With org and app commented out, the handler shown in the AWS console is the expected one, but if I uncomment them, the handler shown there changes.
Why does this happen?
P.S.: I can also see the upload happen in the Serverless dashboard.

How to import swagger/?format=openapi into Postman from django-rest-swagger without the "format not recognized" error

Our project uses django-rest-swagger to manage the API, and we would like to export all APIs and import them into Postman. I can get JSON from the URL localhost:5000/swagger/?format=openapi, but when I import the file, Postman says Error while importing: format not recognized. How can I import swagger/?format=openapi into Postman from django-rest-swagger without this error?
Does anyone know an easy way to solve this? Thanks so much for any advice!
{
swagger: "2.0",
info: {
title: "TestProjectAPI",
description: "",
version: ""
},
host: "localhost:5000",
schemes: [
"http"
],
paths: {
/api-token/: {
post: {
operationId: "api-token_post",
responses: {
201: {
description: ""
}
},
parameters: [
{
name: "data",
in: "body",
schema: {
type: "object",
properties: {
pic_id: {
description: "",
type: "string"
},
phonenumber: {
description: "",
type: "string"
},
checkcode: {
description: "",
type: "string"
},
user_phone: {
description: "",
type: "string"
},
phone_code: {
description: "",
type: "string"
},
username: {
description: "",
type: "string"
},
password: {
description: "",
type: "string"
}
}
}
}
],
description: "User Login",
summary: "User Login",
consumes: [
"application/json"
],
tags: [
"api-token"
]
}
},
/porject_management/: {
get: {
operationId: "porject_management_list",
responses: {
200: {
description: ""
}
},
parameters: [
{
name: "page",
required: false,
in: "query",
description: "A page number within the paginated result set.",
type: "integer"
},
{
name: "page_size",
required: false,
in: "query",
description: "Number of results to return per page.",
type: "integer"
},
{
name: "search",
required: false,
in: "query",
description: "A search term.",
type: "string"
},
{
name: "project",
required: false,
in: "query",
description: "",
type: "string"
},
{
name: "state",
required: false,
in: "query",
description: "",
type: "number"
},
{
name: "ordering",
required: false,
in: "query",
description: "Which field to use when ordering the results.",
type: "string"
}
],
description: "porject management",
summary: "porject management",
tags: [
"porject_management_post"
]
},
post: {
operationId: "porject_management_post",
responses: {
201: {
description: ""
}
},
parameters: [
{
name: "data",
in: "body",
schema: {
type: "object",
properties: {
project: {
description: "",
type: "string"
},
tc_code: {
description: "",
type: "string"
},
visitors_number: {
description: "",
type: "integer"
},
site_selection: {
description: "",
type: "string"
},
contact_name: {
description: "",
type: "string"
},
contact_number: {
description: "",
type: "string"
},
remark: {
description: "",
type: "string"
},
type: {
description: "",
type: "integer"
},
state: {
description: "",
type: "integer"
},
status: {
description: "",
type: "integer"
},
creater: {
description: "",
type: "string"
},
modifier: {
description: "",
type: "string"
}
}
}
}
],
description: "Porject management",
summary: "Porject management",
consumes: [
"application/json"
],
tags: [
"homemanager"
]
}
}
},
securityDefinitions: {
basic: {
type: "basic"
}
}
}
Have you tried using:
python3 manage.py generateschema --file openapi-schema.yml
in the terminal? Then you can import the schema directly into Postman. You are providing JSON format; use YAML format for Postman and it should work.
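If JSON output is preferred, recent versions of Django REST framework's generateschema command can also emit JSON directly (this flag is an addition to the answer above and assumes the project can use DRF's built-in OpenAPI schema generation):
python3 manage.py generateschema --format openapi-json --file openapi-schema.json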
Finally, I solved my problem with eolink.com:
Firstly, import the JSON from localhost:5000/swagger/?format=openapi.
Secondly, export it as Swagger from eolink.com, and then you can import that file into Postman.

Istio VirtualService not used in k8s Service

Hi, I'm very new to Istio/K8s, and I'm trying to make a service that I have, test-service, use a new VirtualService that I've created.
Here are the steps that I did:
kubectl config set-context --current --namespace my-namespace
I create my VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-service
  namespace: my-namespace
spec:
  hosts:
    - test-service
  http:
    - fault:
        delay:
          fixedDelay: 60s
          percentage:
            value: 100
      route:
        - destination:
            host: test-service
            port:
              number: 9100
Then I apply it to K8s:
kubectl apply -f test-service.yaml
But now when I invoke test-service using gRPC, I can reach the service, but the fault delay is not happening.
I don't know in which log I can see whether this test-service is using the VirtualService that I created or not.
Here is my gRPC Service config:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "test-service",
"namespace": "my-namespace",
"selfLink": "/api/v1/namespaces/my-namespace/services/test-service",
"uid": "8a9bc730-4125-4b52-b373-7958796b5df7",
"resourceVersion": "317889736",
"creationTimestamp": "2021-07-07T10:39:54Z",
"labels": {
"app": "test-service",
"app.kubernetes.io/managed-by": "Helm",
"version": "v1"
},
"annotations": {
"meta.helm.sh/release-name": "test-service",
"meta.helm.sh/release-namespace": "my-namespace"
},
"managedFields": [
{
"manager": "Go-http-client",
"operation": "Update",
"apiVersion": "v1",
"time": "2021-07-07T10:39:54Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {
".": {},
"f:meta.helm.sh/release-name": {},
"f:meta.helm.sh/release-namespace": {}
},
"f:labels": {
".": {},
"f:app": {},
"f:app.kubernetes.io/managed-by": {},
"f:version": {}
}
},
"f:spec": {
"f:ports": {
".": {},
"k:{\"port\":9100,\"protocol\":\"TCP\"}": {
".": {},
"f:port": {},
"f:protocol": {},
"f:targetPort": {}
}
},
"f:selector": {
".": {},
"f:app": {}
},
"f:sessionAffinity": {},
"f:type": {}
}
}
},
{
"manager": "dashboard",
"operation": "Update",
"apiVersion": "v1",
"time": "2022-01-14T15:51:28Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:spec": {
"f:ports": {
"k:{\"port\":9100,\"protocol\":\"TCP\"}": {
"f:name": {}
}
}
}
}
}
]
},
"spec": {
"ports": [
{
"name": "test-service",
"protocol": "TCP",
"port": 9100,
"targetPort": 9100
}
],
"selector": {
"app": "test-service"
},
"clusterIP": "****************",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
According to the Istio documentation, configuring fault only works for HTTP traffic, not for gRPC:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection
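As a side note on where to check whether the VirtualService is actually picked up: istioctl can show the configuration applied to a workload (a sketch, not part of the original answer; the pod name is a placeholder and the pod must have an Istio sidecar):
istioctl analyze -n my-namespace
istioctl experimental describe pod <test-service-pod> -n my-namespace
istioctl proxy-config routes <test-service-pod>.my-namespace
The first command checks the namespace for configuration problems, the second lists the VirtualServices and DestinationRules that apply to the pod, and the third dumps the routes the Envoy sidecar actually received.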

AWS Lambda, DynamoDB error after deployment: UnknownEndpoint: Inaccessible host at port `8008'. This service may not be available in the `eu-west-1'

I am testing a FaaS (function as a service) setup using AWS Lambda.
I am getting the following error when testing the API after serverless deploy:
query-error: UnknownEndpoint: Inaccessible host: 'localhost' at port `8008'. This service may not be available in the 'eu-west-1' region.","status":"error"}"
But when running locally using serverless-offline, everything works as expected.
What could be the reason for this error?
Also, validation errors work on the server if wrong params are passed; this error only shows up when the query is being executed.
serverless.ts
/* eslint no-use-before-define: 0 */
import type { AWS } from "#serverless/typescript";
// DynamoDB
import dynamoDbTables from "./resources/dynamodb-tables";
// Functions
import functions from "./resources/functions";
const serverlessConfiguration: AWS = {
service: "lead-management-app",
frameworkVersion: "2",
custom: {
region: "${opt:region, self:provider.region}",
stage: "${opt:stage, self:provider.stage}",
prefix: "${self:service}-${self:custom.stage}",
lead_table: "${self:service}-leads-${opt:stage, self:provider.stage}",
interest_table:
"${self:service}-interests-${opt:stage, self:provider.stage}",
table_throughputs: {
prod: 5,
default: 1,
},
table_throughput:
"${self:custom.table_throughputs.${self:custom.stage}, self:custom.table_throughputs.default}",
dynamodb: {
stages: ["dev"],
start: {
port: 8008,
inMemory: true,
heapInitial: "200m",
heapMax: "1g",
migrate: true,
seed: true,
convertEmptyValues: true,
// Uncomment only if you already have a DynamoDB running locally
// noStart: true
},
},
["serverless-offline"]: {
httpPort: 3000,
babelOptions: {
presets: ["env"],
},
},
profile: {
prod: "prodAccount",
dev: "devAccount",
},
},
plugins: [
"serverless-bundle",
"serverless-dynamodb-local",
"serverless-offline",
"serverless-dotenv-plugin",
],
provider: {
name: "aws",
runtime: "nodejs14.x",
stage: "dev",
region: "eu-west-1",
apiGateway: {
minimumCompressionSize: 1024,
shouldStartNameWithService: true,
},
environment: {
AWS_NODEJS_CONNECTION_REUSE_ENABLED: "1",
NODE_OPTIONS: "--enable-source-maps --stack-trace-limit=1000",
REGION: "${self:custom.region}",
STAGE: "${self:custom.stage}",
LEADS_TABLE: "${self:custom.lead_table}",
INTERESTS_TABLE: "${self:custom.interest_table}",
},
iamRoleStatements: [
{
Effect: "Allow",
Action: [
"dynamodb:DescribeTable",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
],
Resource: [
{ "Fn::GetAtt": ["LeadsTable", "Arn"] },
{ "Fn::GetAtt": ["InterestsTable", "Arn"] },
],
},
],
profile: "${self:custom.profile.${self:custom.stage}}",
lambdaHashingVersion: "20201221",
},
// import the function via paths
functions,
package: { individually: true },
resources: {
Resources: dynamoDbTables,
},
};
module.exports = serverlessConfiguration;
Finally found the culprit: it was an environment variable that I had set locally.
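For anyone hitting the same thing, here is a sketch of the kind of setup that typically causes it (file and variable names are assumptions, not from the original post): the DynamoDB client ends up pointed at the serverless-dynamodb-local endpoint in every stage instead of only when running offline.
// db.js - hypothetical client setup; only use the local endpoint when running offline
const AWS = require("aws-sdk");

// serverless-offline sets IS_OFFLINE, so the local endpoint is only used locally
const isOffline = process.env.IS_OFFLINE;

const dynamoDb = isOffline
  ? new AWS.DynamoDB.DocumentClient({
      region: "localhost",
      endpoint: "http://localhost:8008", // matches custom.dynamodb.start.port above
    })
  : new AWS.DynamoDB.DocumentClient(); // deployed: use the Lambda's own region and credentials

module.exports = dynamoDb;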

How to select AWS Lambda function names using the Node SDK?

At the CLI I can do
aws lambda list-functions
and get all the function details.
Also I can do
aws lambda list-functions --query 'Functions[*].[FunctionName]' --output text
and get a simple list of just the function names.
How can I do that in a Lambda function using the SDK?
I tried
exports.handler = function (event) {
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ apiVersion: '2015-03-31' });
var lambs = lambda.listFunctions();
console.log(lambs);
};
and I have the AWS Lambda full access role.
But I get the output below:
e,
s3DisableBodySigning: true,
computeChecksums: true,
convertResponseTypes: true,
correctClockSkew: false,
customUserAgent: null,
dynamoDbCrc32: true,
systemClockOffset: 0,
signatureVersion: 'v4',
signatureCache: true,
retryDelayOptions: {},
useAccelerateEndpoint: false,
clientSideMonitoring: false,
endpointDiscoveryEnabled: false,
endpointCacheSize: 1000,
hostPrefixEnabled: true,
stsRegionalEndpoints: null
},
isGlobalEndpoint: false,
endpoint: Endpoint {
protocol: 'https:',
host: 'lambda.us-east-2.amazonaws.com',
port: 443,
hostname: 'lambda.us-east-2.amazonaws.com',
pathname: '/',
path: '/',
href: 'https://lambda.us-east-2.amazonaws.com/'
},
_events: { apiCallAttempt: [Array], apiCall: [Array] },
MONITOR_EVENTS_BUBBLE: [Function: EVENTS_BUBBLE],
CALL_EVENTS_BUBBLE: [Function: CALL_EVENTS_BUBBLE],
_clientId: 2
},
operation: 'listFunctions',
params: {},
httpRequest: HttpRequest {
method: 'POST',
path: '/',
headers: {
'User-Agent': 'aws-sdk-nodejs/2.536.0 linux/v12.13.0 exec-env/AWS_Lambda_nodejs12.x'
},
body: '',
endpoint: Endpoint {
protocol: 'https:',
host: 'lambda.us-east-2.amazonaws.com',
port: 443,
hostname: 'lambda.us-east-2.amazonaws.com',
pathname: '/',
path: '/',
href: 'https://lambda.us-east-2.amazonaws.com/',
constructor: [Function]
},
region: 'us-east-2',
_userAgent: 'aws-sdk-nodejs/2.536.0 linux/v12.13.0 exec-env/AWS_Lambda_nodejs12.x'
},
startTime: 2019-12-04T20:30:18.812Z,
response: Response {
request: [Circular],
data: null,
error: null,
retryCount: 0,
redirectCount: 0,
httpResponse: HttpResponse {
statusCode: undefined,
headers: {},
body: undefined,
streaming: false,
stream: null
},
maxRetries: 3,
maxRedirects: 10
},
_asm: AcceptorStateMachine {
currentState: 'validate',
states: {
validate: [Object],
build: [Object],
afterBuild: [Object],
sign: [Object],
retry: [Object],
afterRetry: [Object],
send: [Object],
validateResponse: [Object],
extractError: [Object],
extractData: [Object],
restart: [Object],
success: [Object],
error: [Object],
complete: [Object]
}
},
_haltHandlersOnError: false,
_events: {
validate: [
[Function],
[Function],
[Function: VALIDATE_REGION],
[Function: BUILD_IDEMPOTENCY_TOKENS],
[Function: VALIDATE_PARAMETERS]
],
afterBuild: [
[Function],
[Function: SET_CONTENT_LENGTH],
[Function: SET_HTTP_HOST]
],
restart: [ [Function: RESTART] ],
sign: [ [Function], [Function], [Function] ],
validateResponse: [ [Function: VALIDATE_RESPONSE], [Function] ],
send: [ [Function] ],
httpHeaders: [ [Function: HTTP_HEADERS] ],
httpData: [ [Function: HTTP_DATA] ],
httpDone: [ [Function: HTTP_DONE] ],
retry: [
[Function: FINALIZE_ERROR],
[Function: INVALIDATE_CREDENTIALS],
[Function: EXPIRED_SIGNATURE],
[Function: CLOCK_SKEWED],
[Function: REDIRECT],
[Function: RETRY_CHECK],
[Function: API_CALL_ATTEMPT_RETRY]
],
afterRetry: [ [Function] ],
build: [ [Function: buildRequest] ],
extractData: [ [Function: extractData], [Function: extractRequestId] ],
extractError: [ [Function: extractError], [Function: extractRequestId] ],
httpError: [ [Function: ENOTFOUND_ERROR] ],
success: [ [Function: API_CALL_ATTEMPT] ],
complete: [ [Function: API_CALL] ]
},
emit: [Function: emit],
API_CALL_ATTEMPT: [Function: API_CALL_ATTEMPT],
API_CALL_ATTEMPT_RETRY: [Function: API_CALL_ATTEMPT_RETRY],
API_CALL: [Function: API_CALL]
}END RequestId: dc9caa5c-42b1-47e9-8136-80c3fbdddbc5
REPORT RequestId: dc9caa5c-42b1-47e9-8136-80c3fbdddbc5 Duration: 45.81 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 86 MB
AWS SDK calls return an AWS.Request object, not the response to the actual API call, which typically arrives asynchronously.
You need to add a callback handler like so:
lambda.listFunctions((err, data) => {
if (err) {
console.error(err);
} else {
data.Functions.forEach(func => console.log(func.FunctionName));
}
});
Or simply use async/await, like so (note that the enclosing function must be async):
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();
exports.handler = async (event) => {
const funcs = await lambda.listFunctions().promise();
funcs.Functions.forEach(func => console.log(func.FunctionName));
}
The data/funcs returned to you will be a JavaScript object including an array of functions. See the SDK reference for specifics.
Ideally, use the async/await form. It's simpler, less prone to error, and more modern.
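One more detail worth noting, as an addition to the answer above: listFunctions is paginated, so accounts with many functions need to follow NextMarker to collect them all. A minimal sketch with the v2 SDK:
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.handler = async () => {
  const names = [];
  let marker;
  do {
    // each page returns a batch of functions plus NextMarker when more remain
    const page = await lambda.listFunctions({ Marker: marker }).promise();
    page.Functions.forEach((func) => names.push(func.FunctionName));
    marker = page.NextMarker;
  } while (marker);
  console.log(names);
  return names;
};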