I am trying to set up a load balancer that caches responses only when a query parameter is present in the request or when a custom method on the API is called.
Either something like:
BASE_URL/api/v1/asset/UUID?static_only=true
or something like
BASE_URL/api/v1/asset/UUID:static
I've set up two NEGs linked to the same Cloud Run service; one of them has Cloud CDN enabled so that it can cache responses.
Since some of this API's content requires calls to other services whose data is subject to change, I want caching to be opt-in: responses should only be cached when the query param is present.
I tried to set up a URL map with the following configuration:
defaultService: DEFAULT_BACKEND
name: path-matcher-1
routeRules:
- matchRules:
  - queryParameterMatches:
    - exactMatch: 'true'
      name: static_only
    prefixMatch: /api/v1/asset/*
  priority: 1
  service: BACKEND-2
I couldn't find any examples covering the custom method, and my attempts with the query parameter didn't work. Is my approach wrong?
I guess there are two parts to this question:
How do I configure the URL map to accept a path that ends with a dynamic value (in this case the UUID of the resource)? The * doesn't seem to work.
Is it possible to configure the URL map to use custom methods (AIP-136)?
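For what it's worth, one hedged reading of the URL map docs is that prefixMatch treats its value as a literal string prefix rather than a glob, so the trailing * may itself be the problem; a path like /api/v1/asset/UUID (or /api/v1/asset/UUID:static, since the AIP-136 suffix is just more path characters) already starts with /api/v1/asset/. A sketch of the matchRule under that assumption (backend names are placeholders):

routeRules:
- matchRules:
  # assumption: prefixMatch matches any path that literally starts with
  # this value, so no wildcard is needed for the UUID segment
  - prefixMatch: /api/v1/asset/
    queryParameterMatches:
    - name: static_only
      exactMatch: 'true'
  priority: 1
  service: BACKEND-2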
We need to send a very large amount of logs to a Splunk server from only one k8s pod (a pod with a huge traffic load). I looked at the docs and found this:
https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent
However, there is a note in the docs warning about significant resource consumption. Is there another, more efficient way to do it? These pods handle traffic, and we cannot add extra load that could risk their stability.
There's an official solution for getting Kubernetes logs: Splunk Connect for Kubernetes. Under the hood it also uses fluentd for the logging part.
https://github.com/splunk/splunk-connect-for-kubernetes
You will find a sample config and a methodology to test it on microK8s first to get acquainted with the config and deployment: https://mattymo.io/deploying-splunk-connect-for-kubernetes-on-microk8s-with-helm/
And if you only want logs from a specific container, you can use this section of the values file to select only the logs from the container you're interested in:
fluentd:
  # path of logfiles, default /var/log/containers/*.log
  path: /var/log/containers/*.log
  # paths of logfiles to exclude. object type is array as per fluentd specification:
  # https://docs.fluentd.org/input/tail#exclude_path
  exclude_path:
  #  - /var/log/containers/kube-svc-redirect*.log
  #  - /var/log/containers/tiller*.log
  #  - /var/log/containers/*_kube-system_*.log (to exclude `kube-system` namespace)
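Since the question is about a single high-traffic pod, the inverse approach may be simpler: narrow path so fluentd only tails that pod's log files. A minimal sketch, assuming the pod's name starts with my-high-traffic-pod (a placeholder):

fluentd:
  # assumption: kubelet symlinks container logs under /var/log/containers as
  # <pod-name>_<namespace>_<container-name>-<container-id>.log
  path: /var/log/containers/my-high-traffic-pod*.log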
I am currently trying to create a CloudFront distribution with an application load balancer (ALB) as the origin.
I have set up an Apache server on both my EC2 instances (which are attached to the ALB) and created two different directories:
/var/www/html/cache/index.html
/var/www/html/no_cache/index.html
with the two index.html files stating which instance they are on and whether they belong to the caching or non-caching setup.
After that, I set up CloudFront to cache the files with path pattern /cache/ (using a managed caching policy) and not to cache the files with path pattern /no_cache/. The default behavior is also set to non-caching.
After many trials, I figured out that CloudFront is not caching anything: the index.html file under /cache/index.html changes instantly when I edit it and refresh the page.
I queried the CloudFront access logs from Athena. In both the result_type and x_edge_detailed_result_type columns I always get "Miss". From the official AWS docs, the interpretation of Miss looks like this:
Miss – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin server and returned the result to the viewer.
Could someone tell me more about the problem? I am really confused.
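For reference, one quick way to check cache behavior without going through the logs is to inspect the X-Cache response header that CloudFront adds; the distribution domain below is a placeholder:

# expect "Hit from cloudfront" on a repeat request once the object is cached;
# a persistent "Miss from cloudfront" matches what the logs show
curl -sI https://d1234example.cloudfront.net/cache/index.html | grep -i x-cache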
Is there a simple way to retrieve all items from a DynamoDB table using a mapping template in an API Gateway endpoint? I usually use a Lambda to process the data before returning it, but this is such a simple task that a Lambda seems like overkill.
I have a table that contains data with the following format:
roleAttributeName   roleHierarchyLevel   roleIsActive   roleName
"admin"             99                   true           "Admin"
"director"          90                   true           "Director"
"areaManager"       80                   false          "Area Manager"
I'm happy just getting the data; the representation doesn't matter, as I can transform it further down in my code.
I've been looking around, but all the tutorials explain how to get specific bits of data through queries and params like roles/{roleAttributeName}; I just want to hit roles/ and get all items.
All you need to do is:
create a resource (without curly braces, since we don't need a particular item)
create a GET method
use Scan instead of Query as the Action when configuring the integration request.
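As a hedged sketch of that configuration (the table name roles is an assumption based on the question's data):

AWS Service: DynamoDB
HTTP method: POST
Action:      Scan

with a request body mapping template for content type application/json:

{
    "TableName": "roles"
}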
Now run a test; you should get the response.
To try it out in Postman, deploy the API first, then use the provided invoke URL in Postman followed by your resource name.
API Gateway allows you to proxy DynamoDB as a service. Here you have an interesting tutorial on how to do it (you can ignore the part related to indexes to make it work).
To retrieve all the items from a table, you can use Scan as the action in API Gateway. Keep in mind that DynamoDB limits the result size to 1 MB for both Scan and Query actions.
You can also cap the result yourself, before that limit is applied automatically, by using the Limit parameter.
AWS DynamoDB Scan Reference
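Since the question asks specifically about mapping templates, a hedged sketch of a response mapping template that flattens the Scan output into plain JSON (attribute names taken from the table in the question):

#set($inputRoot = $input.path('$'))
[
#foreach($item in $inputRoot.Items)
  {
    "roleAttributeName": "$item.roleAttributeName.S",
    "roleHierarchyLevel": $item.roleHierarchyLevel.N,
    "roleIsActive": $item.roleIsActive.BOOL,
    "roleName": "$item.roleName.S"
  }#if($foreach.hasNext),#end
#end
]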
I have been trying to achieve federation in my Prometheus setup. While doing this, I want to exclude some metrics from being scraped by the scraping Prometheus.
Here is my federation config:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'xxxxxxxx'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job!="kubernetes-nodes"}'
    static_configs:
      - targets:
        - 'my-metrics-source'
As can be seen from the config, I want to exclude any metric that has the kubernetes-nodes job label and retrieve all the rest. However, when I deploy this config, no metrics are scraped at all.
Is this a bug in Prometheus, or have I simply misunderstood how the match[] params work?
If you really need to do this, you need a primary vector selector that positively includes results.
Otherwise you'll get the error: vector selector must contain at least one non-empty matcher.
So for example with these matchers you'll get what you are trying to achieve:
curl -G --data-urlencode 'match[]={job=~".+", job!="kubernetes-nodes"}' http://your-url.example.com/federate
As a safety measure to avoid you accidentally writing an instant vector that returns all the time series in your Prometheus, selectors must contain at least one matcher that does not match the empty string. Your selector has no such matcher (job!="kubernetes-nodes" matches an empty job label), so this is giving you an error.
You could add a matcher such as __name__=~".+"; however, at a higher level this is an abuse of federation, as it is not meant for pulling entire Prometheus servers. See https://www.robustperception.io/federation-what-is-it-good-for/
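Putting both answers together, a sketch of how the params section of the original scrape config would change (everything else stays as in the question):

params:
  'match[]':
    - '{job=~".+", job!="kubernetes-nodes"}'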
I am trying to develop my first RESTful service in Java and I am having some trouble mapping the methods to CRUD functionality.
My URI structure is as follows and maps to a basic database structure:
/databases/{schema}/{table}/
/databases is static
{schema} and {table} are dynamic and resolved from the path parameters
This is what I have:
Method - URI        - DATA      - Comment
---------------------------------------------------------------------
GET    - /databases - none      - returns a list of databases
POST   - /databases - database1 - creates a database named database1
DELETE - /databases - database1 - deletes the database1 database
PUT    - /databases - database1 - updates database1
Currently, in the example above, I am passing the database name through as a JSON object. However, I am unsure if this is correct. Should I instead be doing this (using the DELETE method as an example):
Method - URI                  - DATA - Comment
---------------------------------------------------------------------
DELETE - /databases/database1 - none - deletes the database with the same name
If this is the correct method and I needed to pass extra data, would the below then be correct:
Method - URI                  - DATA      - Comment
---------------------------------------------------------------------
DELETE - /databases/database1 - some data - deletes the database with the same name
Any comments would be appreciated.
REST is an interface into your domain. Thus, if you want to expose a database, then CRUD will probably work, but there is much more to REST (see below).
REST-afarians will object to your service being called RESTful, since it does not fit one of the key constraints: the hypermedia constraint. That can be addressed, though, if you add links to the documents (hypermedia) that your service generates / serves. After this, your users will follow links and forms to change things in the application (databases, tables, and rows in your example):
- GET /database -> List of databases
- GET /database/{name} -> List of tables
- GET /database/{name}/{table}?page=1 -> First set of rows in table XXXXX
- POST /database/{name}/{table} -> Create a record
- PUT /database/{name}/{table}/{PK} -> Update a record
- DELETE /database/{name}/{table}/{PK} -> Send the record to the big PC in the sky...
Don't forget to add links to your documents!
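As a sketch of what such a hypermedia document might look like (field names and link relations are illustrative, not prescriptive):

{
  "name": "database1",
  "links": [
    { "rel": "self",   "href": "/database/database1" },
    { "rel": "tables", "href": "/database/database1/tables" },
    { "rel": "delete", "href": "/database/database1" }
  ]
}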
Using REST for CRUD is kind of like putting it in a straitjacket :). Your URIs can represent any concept, so how about exposing some more creative / richer URIs based on the underlying resources (functionality) that you want your service or web app to provide?
Take a look at this great article: How to GET a Cup of Coffee