How to deploy grpc-web on AWS?

I have a backend service that I want to expose via grpc-web.
I'm able to use the service directly via the public IP of the EC2 instance, but when I try to access it via the invocation URL of API Gateway I get a CORS error.
I want to add JWT authentication, which is why I want to expose the API via API Gateway.
Here is my configuration:
Envoy.yml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_sim
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: grpc_server
                            timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.cors
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: grpc_server
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: grpc_server
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: grpc_server
                      port_value: 8081
Here is my docker-compose.yml
version: '3.8'
services:
  grpc_server:
    image: XXXXXX
    user: ${UID}:${GID}
    ports:
      - 8081:8081
    tty: true
  proxy:
    ports:
      - 9091:9091
      - 8080:8080
    image: envoyproxy/envoy:v1.22.0
    volumes:
      - ./envoy/envoy.yml:/etc/envoy/envoy.yaml:ro
    tty: true
I have mapped API Gateway with the following configuration:
ANY / maps to the public domain of the EC2 instance on port 8080.
If I add a CORS configuration in API Gateway, the OPTIONS request returns 204 with proper CORS headers, but the POST request does not return the proper headers. If I disable the CORS configuration in API Gateway, the OPTIONS request also fails due to a CORS issue.

Related

How to exclude k8s service 2 service communication , using Gcp Cloud Endpoints Authentication?

Having this manifest file:
swagger: "2.0"
info:
  version: 1.0.0
  title: XXXXXXXXXXXXX
  description: XXXXXXXXXXXXX
  contact: { }
schemes:
  - "http"
  - "https"
host: "API.endpoints.PROJECT_ID.cloud.goog"
basePath: /
x-google-endpoints:
  - name: "API.endpoints.PROJECT_ID.cloud.goog"
    target: "XX.XXX.XXX.XX"
securityDefinitions:
  firebase:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://securetoken.google.com/PROJECT_ID"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "PROJECT_ID"
security:
  - firebase: []
My endpoints are protected, but when I try to call another service inside the cluster (one ClusterIP service calling another internally) I get a 401. As I understand it, the Endpoints DNS records must be protected, but communication among the ClusterIP services should remain open; this is blocking the service mesh.

Google deployment manager fails when the MIG (managed instance group) and the load balancer are in the same config file

I'm working on IaC (Google Deployment Manager - Python and YAML) and tinkering with Google's external load balancer. As part of the PoC I created:
An instance template based on Ubuntu with a startup script installing apache2 and an index.html file.
An instance group based on the above template.
An external HTTP load balancer in front of the instance group.
A network and a firewall rule to make debugging easier.
I heavily borrowed from Cloud Foundation Toolkit, and I'm showing only the config part of the code below. I used Python for templating.
imports:
  - path: ../network/vpc/vpc_network.py
    name: vpc_network.py
  - path: ../network/firewall/firewall_rule.py
    name: firewall_rule.py
  - path: ../compute/instance_template/instance_template.py
    name: instance_template.py
  - path: ../compute/health_checks/health_check.py
    name: health_check.py
  - path: ../compute/instance_group/mananged/instance_group.py
    name: instance_group.py
  - path: ../network_services/load_balancing/external_loadBalancers/external_load_balancer.py
    name: external_load_balancer.py
resources:
  - name: demo-firewall-rules-1
    type: firewall_rule.py
    properties:
      rules:
        - name: "allow-ssh-for-all"
          description: "tcp firewall enable from all"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['0.0.0.0/0']
          ipProtocol: "tcp"
          ipPorts: ["22"]
        - name: "test-health-check-for-ig"
          description: "enable health check to work on the project"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['130.211.0.0/22', '35.191.0.0/16']
          ipProtocol: "tcp"
          ipPorts: ["80", "443"]
        - name: "allow-http-https-from-anywhere"
          description: "allow http and https from anywhere"
          network: $(ref.net-10-69-16.network)
          priority: 1000
          action: "allow"
          direction: "INGRESS"
          sourceRanges: ['0.0.0.0/0']
          ipProtocol: "tcp"
          ipPorts: ["80", "443"]
  - name: net-10-69-16
    type: vpc_network.py
    properties:
      subnetworks:
        - region: australia-southeast1
          cidr: 10.69.10.0/24
        - region: australia-southeast1
          cidr: 10.69.20.0/24
        - region: australia-southeast1
          cidr: 10.69.30.0/24
  - name: mig-regional-1
    type: instance_group.py
    properties:
      region: australia-southeast1
      instanceTemplate: $(ref.it-demo-1.selfLink)
      targetSize: 2
      autoHealingPolicies:
        - healthCheck: $(ref.demo-http-healthcheck-MIG-1.selfLink)
          initialDelaySec: 400
  - name: it-demo-1
    type: instance_template.py
    properties:
      machineType: n1-standard-1
      tags:
        items:
          - http
      disks:
        - deviceName: boot-disk-v1
          initializeParams:
            sourceImage: projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20201211
            diskType: pd-ssd
      networkInterfaces:
        - network: $(ref.net-10-69-16.network)
          subnetwork: $(ref.net-10-69-16.subnetworks[0])
          accessConfigs:
            - type: ONE_TO_ONE_NAT
      metadata:
        items:
          - key: startup-script
            value: |
              sudo apt-get install -y apache2
              sudo apt-get install -y php7.0
              sudo service apache2 restart
              sudo echo "Ho Ho Ho from $HOSTNAME" > /var/www/html/index.html
  - name: demo-http-healthcheck-MIG-1
    type: health_check.py
    properties:
      type: HTTP
      checkIntervalSec: 5
      timeoutSec: 5
      unhealthyThreshold: 2
      healthyThreshold: 2
      httpHealthCheck:
        port: 80
        requestPath: /
  - name: http-elb-1
    type: external_load_balancer.py
    properties:
      portRange: 80
      backendServices:
        - resourceName: backend-service-for-http-1
          sessionAffinity: NONE
          affinityCookieTtlSec: 1000
          portName: http
          healthCheck: $(ref.demo-http-healthcheck-MIG-1.selfLink)
          backends:
            - group: $(ref.mig-regional-1.selfLink)
              balancingMode: UTILIZATION
              maxUtilization: 0.8
      urlMap:
        defaultService: backend-service-for-http-1
The problem is with the load balancer. When it is present in the same file, the deployment fails, saying the managed instance group mig-regional-1 does not exist. But when I move the load balancer part out of this file and deploy it separately (after a bit of delay), it all goes through well.
The most probable explanation is that the instance group is not ready when the load balancer tries to reference it, which is exactly the situation $(ref.*.selfLink) is supposed to handle.
This is not exactly a blocker, but it would be nice to have the entire config in one file.
So, my questions:
Have any of you faced this before or am I missing something here?
Do you have any solution for this?
Your template needs to make its dependencies explicit.
You can have dependencies between your resources, such as when you
need certain parts of your environment to exist before you can deploy
other parts of the environment.
You can specify dependencies using the dependsOn option in your templates.
It seems your external LB resource needs a dependsOn attribute that references the MIG.
Refer to the Deployment Manager documentation on creating explicit dependencies for more information.
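A minimal sketch of that change applied to the config above; dependsOn goes under the resource's metadata and lists the names of the resources that must exist first:

```yaml
resources:
  # ... other resources unchanged ...
  - name: http-elb-1
    type: external_load_balancer.py
    metadata:
      dependsOn:
        - mig-regional-1   # create the MIG before this load balancer
    properties:
      portRange: 80
      # ... remaining properties unchanged ...
```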

Need to redirect my gateway to an endpoint (bad request)

I need to create an endpoint where I can create a user, using express-gateway. It has two ports running:
gateway http server listening on :::8181
admin http server listening on 127.0.0.1:9876
I can create a user by sending my information to:
http://127.0.0.1:9876/users
I can't use this as my endpoint because my frontend is configured differently, so the frontend sends the user-creation data to:
http://localhost:8181/api/user/create
Now I need to send my information to http://localhost:8181/api/user/create and have the gateway redirect it internally to http://127.0.0.1:9876/users. I've tried a few things but only get bad gateway or not found errors. I call this endpoint users, and this is the script:
http:
  port: 8181
admin:
  port: 9876
  host: localhost
apiEndpoints:
  events:
    host: localhost
    paths: ["/api/events*", "/swagger*"]
    methods: ["GET", "PATCH"]
  users:
    host: localhost
    paths: "/api/user/create*"
    url: "http://localhost:9876"
    methods: ["POST", "OPTIONS"]
  eventsCreate:
    host: localhost
    paths: "/api/events*"
    methods: ["POST", "PUT", "OPTIONS"]
  auth:
    host: localhost
    paths: "/api/auth*"
    methods: ["POST", "GET", "OPTIONS"]
serviceEndpoints:
  auth:
    url: "http://localhost:59868"
  events:
    url: "http://localhost:5000"
  users:
    url: "http://localhost:9876"
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - jwt
  - request-transformer
pipelines:
  authPipeline:
    apiEndpoints:
      - auth
    policies:
      - cors:
      - log:
          action:
            message: "auth ${req.method}"
      - proxy:
          action:
            serviceEndpoint: auth
            changeOrigin: true
  eventsPipeline:
    apiEndpoints:
      - events
    policies:
      - cors:
      - log:
          action:
            message: "events ${req.method}"
      - proxy:
          action:
            serviceEndpoint: events
            changeOrigin: true
  usersPipeline:
    apiEndpoints:
      - users
    policies:
      - cors:
      - log:
          action:
            message: "users ${req.method}"
      - proxy:
          action:
            serviceEndpoint: users
            changeOrigin: true
  userPipeline:
    apiEndpoints:
      - events
    policies:
      - cors:
      - log:
          action:
            message: "events ${req.method}"
      - proxy:
          action:
            serviceEndpoint: events
            changeOrigin: true
  eventsCreatePipeline:
    apiEndpoints:
      - eventsCreate
    policies:
      - cors:
      - log:
          action:
            message: "events ${req.method}"
      - jwt:
          action:
            secretOrPublicKey: "MORTADELAIsMyPassion321"
            checkCredentialExistence: false
      - proxy:
          action:
            serviceEndpoint: events
            changeOrigin: true
You are trying to map the incoming URL http://localhost:8181/api/user/create to the Express Gateway administration URL http://localhost:9876/users, but your proxy policy only changes the hostname and port components of the URL, not the path.
This is described in the Path Management section of the Proxy documentation.
To change the path, you'll need to either adjust the existing users service endpoint or create a new one, and add some instructions to the proxy middleware configuration:
For example, add a new ServiceEndpoint called userCreate:
serviceEndpoints:
  auth:
    url: "http://localhost:59868"
  userCreate:
    url: "http://localhost:9876/users"
  users:
    url: "http://localhost:9876"
And then refer to the new service endpoint and set stripPath in the proxy configuration:
- proxy:
    action:
      serviceEndpoint: userCreate
      changeOrigin: true
      stripPath: true
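Putting the pieces together, the usersPipeline would then reference the new service endpoint (a sketch based on the configuration above):

```yaml
usersPipeline:
  apiEndpoints:
    - users
  policies:
    - cors:
    - log:
        action:
          message: "users ${req.method}"
    - proxy:
        action:
          serviceEndpoint: userCreate
          changeOrigin: true
          stripPath: true
```

With stripPath enabled, the matched endpoint path is removed from the forwarded URL, so a POST to http://localhost:8181/api/user/create is proxied to http://localhost:9876/users.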

google cloud endpoints api_method not found on gke

404 response - Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND on Google Cloud Endpoints ESP.
I'm trying to deploy my API with Google Cloud Endpoints, with my backend on GKE. I'm getting this error in the Produced API logs:
Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND
and I'm getting a 404 response from the endpoint.
The backend container is answering correctly, but when I try to POST to http://[service-ip]/v1/postoperation I get the 404 error. I'm guessing it's related to the api_method name, but I've already changed it so it's the same in the openapi.yaml, the GKE deployment, and app.py.
I deployed the API service successfully with this openapi.yaml:
swagger: "2.0"
info:
  description: "API rest"
  title: "API example"
  version: "1.0.0"
host: "api.endpoints.gcp-project.cloud.goog"
basePath: "/v1"
# [END swagger]
consumes:
  - "application/json"
produces:
  - "application/json"
schemes:
  # Uncomment the next line if you configure SSL for this API.
  #- "https"
  - "http"
paths:
  "/postoperation":
    post:
      description: "Post operation 1"
      operationId: "postoperation"
      produces:
        - "application/json"
      responses:
        200:
          description: "success"
          schema:
            $ref: "#/definitions/Model"
        400:
          description: "Error"
      parameters:
        - description: "Description"
          in: body
          name: payload
          required: true
          schema:
            $ref: "#/definitions/Resource"
definitions:
  Resource:
    type: "object"
    required:
      - "text"
    properties:
      tipodni:
        type: "string"
      dni:
        type: "string"
      text:
        type: "string"
  Model:
    type: "object"
    properties:
      tipodni:
        type: "string"
      dni:
        type: "string"
      text:
        type: "string"
      mundo:
        type: "string"
      cluster:
        type: "string"
      equipo:
        type: "string"
      complejidad:
        type: "string"
Then I tried to configure the backend and ESP with this deploy.yaml and lb-deploy.yaml:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: api-deployment
  namespace: development
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: api1
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: api1
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: secret-key
      containers:
        - name: api-container
          image: gcr.io/gcp-project/docker-pqr:IMAGE_TAG_PLACEHOLDER
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          ports:
            - containerPort: 5000
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:5000",
            "--service=api.endpoints.gcp-project.cloud.goog",
            "--rollout_strategy=managed"
          ]
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: "api1-lb"
  namespace: development
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # loadBalancerIP: "172.30.33.221"
  selector:
    app: api1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8081
My Flask app that serves the API is this app.py:
from flask import Flask, request, jsonify

app = Flask(__name__)
categorizador = Categorizador(model_properties.paths)

@app.route('/postoperation', methods=['POST'])
def postoperation():
    text = request.get_json().get('text', '')
    dni = request.get_json().get('dni', '')
    tipo_dni = request.get_json().get('tipo_dni', '')
    categoria, subcategoria = categorizador.categorizar(text)
    content = {
        'tipodni': tipo_dni,
        'dni': dni,
        'text': text,
        'mundo': str(categoria),
        'cluster': str(subcategoria),
        'equipo': '',
        'complejidad': ''
    }
    return jsonify(content)
Looks like you need to configure the route in your Flask app.
Try this:
@app.route('/v1/postoperation', methods=['POST'])
Some bits from kubectl expose -h
--port='' - The port that the service should serve on. Copied from the resource being exposed, if unspecified
--target-port='' - Name or number for the port on the container that the service should direct traffic to.
Optional.
While the proxy is directing your traffic to --backend=127.0.0.1:5000, use the container name instead: --backend=api-container:5000.

How to use scopes in OAuth 2.0 to authorize a user using Express Gateway (Microservice API Gateway)?

Scopes with the key-auth mechanism work perfectly, but when I use scopes with the OAuth 2.0 mechanism, I get an unauthorized error.
Without scopes, the OAuth 2.0 mechanism works perfectly. Please suggest how to solve this problem.
Following is Gateway YAML configuration:
http:
  port: 8080
admin:
  port: 9876
  host: localhost
apiEndpoints:
  api:
    - host: 'localhost'
      paths: ['/user', '/user/:id']
      methods: ["GET"]
      scopes: ["user"]
    - host: 'localhost'
      paths: ['/user', '/user/:id']
      methods: ["PUT", "POST", "DELETE"]
      scopes: ["admin"]
  myApiRest:
    host: 'localhost'
    paths: '/posts'
serviceEndpoints:
  jsonplaceholder:
    url: 'http://localhost:8899'
  restDummyService:
    url: 'https://jsonplaceholder.typicode.com'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  - name: one
    apiEndpoints:
      - api
    policies:
      - oauth2:
      #- basic-auth:
      #- key-auth:
      - proxy:
          - action:
              serviceEndpoint: jsonplaceholder
              changeOrigin: true
  - name: two
    apiEndpoints:
      - myApiRest
    policies:
      #- key-auth:
      - proxy:
          - action:
              serviceEndpoint: restDummyService
              changeOrigin: true