I've written a piece of code that adds and retrieves entities from the Datastore based on one filter (and an order on the same property), and that worked fine. But when I tried adding filters on more properties, I got:
PreconditionFailed: 412 no matching index found. recommended index is:
- kind: Temperature
  properties:
  - name: DeviceID
  - name: created
Eventually I figured out that I need to create index.yaml. Mine looks like this:
indexes:
- kind: Temperature
  ancestor: no
  properties:
  - name: ID
  - name: created
  - name: Value
And it seems to be recognised, as the console shows that it has been updated.
Yet when I run my code (specifically the part below, which filters on two properties), it still fails with the same error. The code is running on Compute Engine.
query.add_filter('created', '>=', newStart)
query.add_filter('created', '<', newEnd)
query.add_filter('DeviceID', '=', devID)
query.order = ['created']
Running the same query in the console produces the following error:
Your Datastore does not have the composite index (developer-supplied) required for this query.
Searching turned up one other person who had the same issue; he managed to fix it by changing the order of the properties in index.yaml, but that doesn't help in my case. Has anybody encountered a similar problem, or could someone help me with a solution?
You'll need to create the exact index suggested in the error message:
- kind: Temperature
  ancestor: no
  properties:
  - name: DeviceID
  - name: created
Specifically, the first property in the index needs to be DeviceID rather than ID, and the last property needs to be the one you're using in the inequality filter and sort order (created), so you can't have Value as the last property in the index.
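For completeness, the whole index.yaml would then contain an entry like this (Value is omitted because the query above never filters or orders on it):
indexes:
- kind: Temperature
  ancestor: no
  properties:
  - name: DeviceID
  - name: created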
I have an IstioOperator deployment with logs enabled in JSON format:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
No specific accessLogFormat is defined, so the default one applies:
[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%
\"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\"
\"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n
However, what I want is to add another field at the end of the log, named PATH_MAIN, which is derived from the original path attribute but altered with regexes (the patterns are already figured out), for example to redact GUIDs.
My question is: how can I, if it's possible, define a new field in the log format that takes another field as its input and derives its value with a regex?
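As a point of reference only, a custom format is declared under meshConfig.accessLogFormat; the sketch below (field names chosen here, not taken from the original post) shows where such a PATH_MAIN-style key would live. The value simply echoes the raw path placeholder, since the plain format operators by themselves don't do regex rewriting, which is exactly the open question:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    # Sketch only: "path_main" just repeats the original path; a regex-derived
    # value would need extra machinery (e.g. an EnvoyFilter) on top of this.
    accessLogFormat: |
      {"path_main": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%", "method": "%REQ(:METHOD)%", "response_code": "%RESPONSE_CODE%"}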
I'm trying to add a static value field to the ops agent without success. This is the processor I'm using:
modify_fields:
  type: modify_fields
  fields:
    env:
      static_value: somenv
Also tried:
modify_fields:
  type: modify_fields
  fields:
    env:
      default_value: somenv
I just need all the documents sent by that machine to have an "env" field with the value "someenv". The error I'm getting is "env field not found".
Thank you
The issue was that this feature was not available in the version I was using (2.16), so I upgraded to 2.18 and now it works. You also need to follow the LogEntry structure:
modify_fields:
  type: modify_fields
  fields:
    jsonPayload.env:
      static_value: somenv
Also, the record_log_file_path property, which wasn't working before, is working now.
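For context, a full Ops Agent config.yaml wiring this processor into a pipeline might look roughly like the following; the receiver name and log path are made up for the sketch, only the modify_fields part comes from the answer above:
logging:
  receivers:
    my_app_log:
      type: files
      include_paths:
      - /var/log/my_app/*.log
      record_log_file_path: true   # the option mentioned above
  processors:
    modify_fields:
      type: modify_fields
      fields:
        jsonPayload.env:
          static_value: somenv
  service:
    pipelines:
      default_pipeline:
        receivers: [my_app_log]
        processors: [modify_fields]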
In one of my deployment files, I want to set an environment variable. The variable is KUBE_VERSION and its value must be fetched from a ConfigMap.
kube_1_21: 1.21.10_1550
This is the part of the ConfigMap from which I want to set KUBE_VERSION to 1.21.10_1550, but if the cluster is on IKS 1.20, then the key will be:
kube_1_20: 1.20.21_3456
The kube_ prefix is always static. How can I set the environment variable using a regex expression?
Something of this sort:
- name: KUBE_VERSION
  valueFrom:
    configMapKeyRef:
      name: cluster-info
      key: "kube_1*"
As far as I know it is unfortunately not possible to use a regular expression the way you would like. Additionally, the key itself is validated against a regular expression:
regex used for validation is '[-._a-zA-Z0-9]+'
It follows that you have to provide the key as a literal string of alphanumeric characters (plus -, _ and .), so a regex cannot be used in this place.
As a workaround you can write a custom script, e.g. in Bash, that looks up the right key and substitutes it into the manifest with a sed command; a sketch follows.
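A minimal sketch of that idea (the template file name and the __KUBE_KEY__ placeholder are hypothetical):
#!/bin/sh
# Find whichever kube_1_* key the cluster-info ConfigMap currently contains...
KEY=$(kubectl get configmap cluster-info \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}' | grep '^kube_1')
# ...then substitute it into the deployment template and apply the result.
# deployment-template.yaml carries key: __KUBE_KEY__ under configMapKeyRef.
sed "s/__KUBE_KEY__/${KEY}/" deployment-template.yaml | kubectl apply -f -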
I am trying to execute the following query:
SELECT google_uid FROM User WHERE api_key = @api_key
But I get the error:
no matching index found. recommended index is:
- kind: User
  properties:
  - name: api_key
  - name: google_uid
Here is the index configuration from Google:
I uploaded it yesterday, so I am sure Google have had time to update it on their side.
Any idea how to solve it?
Thanks
The properties in the index are ordered. So you have an index on (google_uid, api_key), but you don't have an index on (api_key, google_uid). This query requires a composite index on (api_key, google_uid).
You can see this if you run the query SELECT data_clicked FROM User WHERE api_key = @api_key. That will work, since you have an index where api_key is the first property.
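In index.yaml terms, the entry matching the error message's suggestion would be:
indexes:
- kind: User
  properties:
  - name: api_key
  - name: google_uid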
I have uploaded my index.yaml file using the command line SDK:
Unfortunately I now have some of my entities indexed twice (but they are all serving):
But I am still getting a "Need Index Error" on running the page:
NeedIndexError: no matching index found. recommended index is:
- kind: RouteDetails
  ancestor: yes
  properties:
  - name: RouteName
    direction: desc
The suggested index for this query is:
- kind: RouteDetails
  ancestor: yes
  properties:
  - name: RouteName
    direction: desc
How can I get Google App Engine to recognise my entity's index?
And how do I delete the duplicates? (Do I need to?)
Datastore requires an explicit composite index for each query shape that can't be served by the built-in single-property indexes, for example queries that filter or order on more than one property, or ancestor queries combined with filters or sort orders. A kind will therefore show up in more than one index if you run different query shapes against it.
For example:
SELECT * FROM RouteDetails
WHERE __key__ HAS ANCESTOR KEY(ParentKind, 'foo')
ORDER BY RouteName ASC
requires an ascending index:
- kind: RouteDetails
  ancestor: yes
  properties:
  - name: RouteName
    direction: asc
And
SELECT * FROM RouteDetails
WHERE __key__ HAS ANCESTOR KEY(ParentKind, 'foo')
ORDER BY RouteName DESC
requires a separate descending index:
- kind: RouteDetails
  ancestor: yes
  properties:
  - name: RouteName
    direction: desc
https://cloud.google.com/datastore/docs/concepts/indexes
In your case, it appears you are performing an ancestor query with a descending ORDER BY on the RouteName property, and adding the suggested index to your index.yaml file should solve the problem for you.
As for the suspected "duplicates": which indexes need to exist depends on the specific queries your application performs.
But if you determine that you have extra unused indexes, the instructions for vacuuming indexes can be found here: https://cloud.google.com/datastore/docs/tools/indexconfig#Datastore_Deleting_unused_indexes
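With a reasonably recent Cloud SDK that boils down to something like the following (shown as an assumption; check the linked page for the exact invocation in your environment):
gcloud datastore indexes cleanup index.yaml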
The index includes the ORDER BY direction: you can see the up arrows in the console view indicating all fields are ascending. The suggested index is a descending index on one of the properties.
Your 'duplicate' indexes were introduced by the reason field, which you indexed both with a capital R and with a lower-case r; those are two differently named properties.
Just in case anyone else stumbles across this question with a similar problem: the reason it was looking for an index with ancestor: yes is that I was using the wrong query; it should not have had an ancestor key in it at all.
Here is my new query:
class RouteDetails(ndb.Model):
    """Get list of routes from Datastore."""
    RouteName = ndb.StringProperty()

    @classmethod
    def query_routes(cls):
        # Non-ancestor query, ordered by RouteName descending.
        return cls.query().order(-cls.RouteName)


class RoutesPage(webapp2.RequestHandler):
    def get(self):
        adminLink = authenticate.get_adminlink()
        authMessage = authenticate.get_authmessage()
        self.output_routes(authMessage, adminLink)

    def output_routes(self, authMessage, adminLink):
        self.response.headers['Content-Type'] = 'text/html'
        html = templates.base
        html = html.replace('#title#', templates.routes_title)
        html = html.replace('#authmessage#', authMessage)
        html = html.replace('#adminlink#', adminLink)
        self.response.out.write(html + '<ul>')
        list_name = self.request.get('list_name')
        # version_key = ndb.Key("List of routes", list_name or "*notitle*")
        routes = RouteDetails.query_routes().fetch(20)
        for route in routes:
            self.response.out.write('<li>%s</li>' % route)
        self.response.out.write('</ul>' + templates.footer)
I was using this page of the documentation, which doesn't tell you how to construct a query for a kind with no ancestor.
https://cloud.google.com/appengine/docs/standard/python/datastore/queries