I'm having trouble with Google App Engine indexes. When I run my app via the GoogleAppEngineLauncher, it works fine. When I deploy the app, I get the following error:
NeedIndexError: no matching index found.
The suggested index for this query is:
- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc
The error is generated after this line of code:
bars = bar_query.fetch(10)
Before the above line of code, it reads:
bar_query = Bar.query(ancestor=guestbook_key(guestbook_name)).order(-Bar.rating)
My index.yaml file contains exactly the "suggested" index, below the # AUTOGENERATED line:
- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc
Am I maybe missing something? I removed the index.yaml file and deployed the app again (via the command line), and one fewer file was uploaded, so the index.yaml file is definitely being picked up.
Everything is working fine locally. I'm working on the latest Mac OS X. The command used for deployment was:
appcfg.py -A app-name --oauth2 update app
The datastore I implemented is loosely based on the guestbook tutorial app.
Any help would be greatly appreciated.
EDIT:
My ndb.Model is defined as follows:
class Bar(ndb.Model):
    content = ndb.StringProperty(indexed=False)
    lat = ndb.FloatProperty(indexed=False)
    lon = ndb.FloatProperty(indexed=False)
    rating = ndb.IntegerProperty(indexed=True)  # the only indexed property, used for ordering
    url = ndb.TextProperty(indexed=False)
Check https://appengine.google.com/datastore/indexes to see whether this index is present and its status is set to "Serving". It's possible that the index is still being built.
The development environment emulates the production environment, but it does not really have indexes in the Datastore sense.
Probably a little late now, but running "gcloud app deploy index.yaml" helped since running deploy by itself ignored the index.yaml file.
As others have said, the dashboard at https://appengine.google.com/datastore/indexes will be showing "pending" for a while.
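In other words, a minimal sketch of the deploy sequence with the gcloud tooling (assuming index.yaml sits next to app.yaml in the project root):

# deploy the application code (app.yaml in the current directory)
gcloud app deploy
# deploy the index definitions separately; a plain "gcloud app deploy" does not upload them
gcloud app deploy index.yaml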
I stumbled on the same issue and your comments pointed me in the right direction. Here's what Google says about how to handle this:
According to the Google documentation, when using
gcloud app deploy
the index.yaml file is not uploaded (the question is why not). In any case, you have to upload this index file manually.
To do so, the documentation gives the following command:
gcloud datastore create-indexes index.yaml
(assuming you execute this from the same directory as the index.yaml file)
Once you have done this, you can go to the Datastore console and you will see that the index has been created. It will then start building (that took some 5 minutes in my case), and once the index is serving you can start your application.
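So, assuming index.yaml is in the current directory, the sequence is roughly:

# upload the index definitions to Cloud Datastore
gcloud datastore create-indexes index.yaml
# optionally check the build status from the CLI instead of the console
gcloud datastore indexes list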
I fixed this issue by moving the index that the error says is missing above the autogenerated line in the index.yaml file.
In your case the yaml file will look like:
indexes:

- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc

# AUTOGENERATED
Then all you have to do is update your app and then update the indexes. You update the indexes by running the following command:
appcfg.py [options] update_indexes <directory>
with <directory> being the directory containing your index.yaml file. You should then see that index on your dashboard at https://appengine.google.com/datastore/indexes
The index will initially show "Pending", but once it says "Serving" you will be able to make your query.
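Put together with the deployment command from the question, a sketch of the full sequence (assuming, as in the question, that the app lives in a directory called app) would be:

# upload the application code
appcfg.py -A app-name --oauth2 update app
# upload the index definitions from app/index.yaml
appcfg.py -A app-name --oauth2 update_indexes app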
This NeedIndexError can be triggered by different causes (I arrived here with a slightly different problem), so I'll explain everything I was doing wrong in order to show what can be done:
I thought I had to have only one index per kind of entity. That's not true: as far as I found, you need as many indexes as there are different queries you will make (see the index.yaml sketch after this list).
On the development web server, indexes are autogenerated and placed below the #AUTOGENERATED line in the index.yaml file.
After modifying indexes, I first run gcloud datastore indexes create index.yaml and wait until the indexes are Serving at https://console.cloud.google.com/datastore/indexes?project=your-project.
I clean up unused indexes by executing gcloud datastore indexes cleanup index.yaml, being careful not to delete indexes that are still being used in production. Reference here
Be aware that if you don't specify a direction on your index properties, it will be ASC by default. So if you are making a descending (-) sort query, it will again raise the error.
Things I think are true but have no evidence for beyond my particular problem, which I hope can help as a kind of brainstorming:
Indexes are important while querying data, not when uploading.
Manually creating the #AUTOGENERATED line does not seem to be necessary if you are generating indexes manually. Reference here
Since the development server adds indexes below the #AUTOGENERATED line while you make queries, you can "accidentally" solve your problem by adding this line, while the real problem is a missing manual index update using the gcloud datastore indexes create index.yaml command. Reference here and here
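To illustrate the points about one index per query shape and explicit directions, here is a sketch of an index.yaml serving two different queries on the Bar kind. The "city" property in the second entry is made up for the example; the Bar model in the question doesn't have it:

indexes:

- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc

# hypothetical second query: ancestor + equality filter on "city", ordered by -rating
- kind: Bar
  ancestor: yes
  properties:
  - name: city
  - name: rating
    direction: desc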
In my case, I uploaded the index file manually as below:
gcloud datastore indexes create "C:\Path\of\your\project\index.yaml"
Then you should confirm the update:
Configurations to update:
descriptor: [C:\Path\of\your\project\index.yaml]
type: [datastore indexes]
target project: [project_name]
Do you want to continue (Y/n)? y
Then you can go to the Datastore console to check whether the index has been created via this link:
https://console.cloud.google.com/datastore/indexes
I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled “druid-google-extensions” by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I got the following error when I tried to ingest data from GCS buckets:
Error: Cannot construct instance of org.apache.druid.data.input.google.GoogleCloudStorageInputSource, problem: Unable to provision, see the following errors:

1) Error in custom provider, java.io.IOException: The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  while locating com.google.api.client.http.HttpRequestInitializer for the 3rd parameter of org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  at org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.storage.google.GoogleStorageDruidModule)
  while locating org.apache.druid.storage.google.GoogleStorage

1 error

at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 180] (through reference chain: org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to that case. Please help me.
We want to move data from GCP to on-prem Druid. We don't want to run the cluster in GCP, so we need to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or environment changes.
You must define GOOGLE_APPLICATION_CREDENTIALS so that it points to a file path; it must not contain the file content.
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it and to set the env var to point to that volume.
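For example, a minimal sketch of a systemd drop-in that sets the variable (the unit name and key path below are placeholders, not something from the Druid docs):

# /etc/systemd/system/druid.service.d/credentials.conf  (hypothetical unit name)
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/etc/druid/gcp-service-account.json"

After adding it, run systemctl daemon-reload and restart the Druid service so the variable is picked up.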
I'm using Siddhi to create an app that also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug when making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table and then create another one in the same table); it seems the batch updates are causing some problems. So I just want to give it a try after disabling batchUpdate (it is enabled by default). I just don't know how to configure this using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd appreciate a quick response on how to set these fields.
Best regards...
When batchEnable has been set to true, it will perform the insert/update operations on a batch of events instead of performing them on each and every single event. Simply put, this has been introduced to improve performance.
The default value of this parameter is currently set to "true".
However, the batchEnable configuration is done through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be configured in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
But currently there is no way to overwrite those system configurations at the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the Siddhi SDK. It is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Set false in the <batchEnable>true</batchEnable> element under the <database name="PostgreSQL"> tag and save it.
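After the edit, the relevant fragment of rdbms-table-config.xml should look roughly like this (all other elements left as they were):

<database name="PostgreSQL">
    ...
    <batchEnable>false</batchEnable>
    ...
</database>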
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config.
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false
I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this, more specifically the S3 upload is only for the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an updated message but nothing actually happens - there's no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project, I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am supplementing the X's for my keys obviously).
This method of running the project appears to be slightly different to the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant permissions to the API user to allow uploads.
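For instance, a minimal bucket policy along these lines would let a specific IAM user upload objects (the bucket name, account id and user name below are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDemoUserUploads",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/grails-demo-user" },
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}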
I've submitted a training job to the cloud using the RESTful API and see in the console logs that it completed successfully. In order to deploy the model and use it for predictions I have saved the final model using tf.train.Saver().save() (according to the how-to guide).
When running locally, I can find the graph files (export-* and export-*.meta) in the working directory. When running on the cloud however, I don't know where they end up. The API doesn't seem to have a parameter for specifying this, it's not in the bucket with the trainer app, and I can't find any temporary buckets on the cloud storage created by the job.
When you set up your Cloud ML environment you set up a bucket for this purpose. Have you looked in there?
https://cloud.google.com/ml/docs/how-tos/getting-set-up
Edit (for future record): As Robert mentioned in the comments, you'll want to pass the output location to the job as an argument. A couple of things to be mindful of (see the sketch after this list):
Use a unique output location per job, so one job doesn't clobber the outputs of another.
The recommendation is to specify the parent output path and use it to contain the exported model in a subpath called 'model', as well as to organize other outputs like checkpoints and summaries within that path. That makes it easier to manage all the outputs.
While not required, I'll also suggest staging the training code in a packages subpath of the output, which helps correlate the source with the outputs it produces.
Finally(!), also keep in mind that when you use hyperparameter tuning, you'll need to append the trial id to the output path for outputs produced by individual runs.
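A rough sketch of how the trainer might consume that output path (TensorFlow 1.x style, matching the tf.train.Saver() approach in the question; the flag name --output_path and the TF_CONFIG trial lookup are assumptions on my part, not part of the original answer):

import argparse
import json
import os

import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument('--output_path', required=True,
                    help='Parent GCS path for model/, checkpoints and summaries')
args, _ = parser.parse_known_args()

# During hyperparameter tuning, Cloud ML puts the trial id into TF_CONFIG;
# appending it keeps individual trials from overwriting each other.
trial = json.loads(os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = os.path.join(args.output_path, trial) if trial else args.output_path

# ... build the real graph here; a single variable keeps this sketch runnable ...
global_step = tf.Variable(0, name='global_step')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    # The exported graph files end up under <output_path>/model/
    saver.save(sess, os.path.join(output_path, 'model', 'export'))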
The custom Lucene index on my Sitecore 6.2 Content Delivery server seems to be broken, so I think I need to rebuild all 3 of my custom indexes. How do I do that? Do I just have to use the shared source Index Viewer module? Right now I have it installed on my CD server, but for some reason it is not working: when I select my custom index in Index Viewer, nothing happens. So I can't rebuild the index that way. Can I just delete the index files from the hard drive? If so, how quickly will Lucene rebuild them?
As noted above, earlier versions of Sitecore 6.x required custom indexes to be rebuilt using either IndexViewer or some custom code. I believe that in a revision of 6.5, Control Panel > Database > Rebuild Search Indexes began including custom indexes, so IndexViewer is no longer necessary (but should still work).
To your specific question though, on my CD servers I have a rebuild script that can be called directly to rebuild search indexes. I forget where I found this script (believe it was something published by Alex Shyba at Sitecore). You can find the details of this script at https://gist.github.com/Refactored/6776801
However, I believe you have a different issue that needs to be addressed. If your CD servers aren't detecting changes and therefore aren't updating, you have a configuration issue. I would start with this article when troubleshooting index issues: http://sitecoreblog.alexshyba.com/2011/04/search-index-troubleshooting.html
I ended up contacting Sitecore support and they pointed me to the shared source module called Sitecore Support Toolbox - http://marketplace.sitecore.net/en/Modules/Sitecore_Support_Toolbox.aspx. Once I installed that I was able to easily rebuild my indexes.
Since Sitecore 6.6 Update-3 or Update-4 (I don't remember which one it was) you can rebuild your custom indexes from the Sitecore Control Panel.
In all previous versions you need to rebuild them from code or by using custom modules for Sitecore. Deleting the index files won't work.
The simplest code for rebuilding custom Sitecore Lucene Index is:
Sitecore.Search.SearchManager.GetIndex("your_index_name").Rebuild()
The blog post "Troubleshooting Sitecore Lucene search and indexing" can help you if rebuilding the index won't solve your problem.
I have come across the same requirement in one of my projects. Here was my solution:
Create a configuration content item with a template that has only one field, say "Rebuild Index", default value is "1", example of the item path could be: "/sitecore/content/mysite/config/index rebuild flag"
Create an IndexRebuilder class that has a Run method. Within the Run method, check the "index rebuild flag" item (from the context database) and rebuild the index on the server if the "Rebuild Index" field value equals "1". After a successful rebuild, update the item field value to "0" (see the sketch after these steps).
Set up a scheduled agent that points to the IndexRebuilder class. For example:
<agent type="MyAssembly.IndexRebuilder, MyAssembly" method="Run" interval="00:00:00"/>
Notice that the interval is "00:00:00" by default, to turn the agent off on the content management server. Your build and deployment process should change this value to, say, "00:05:00", which allows the agent to run every 5 minutes.
From there, to rebuild the index on a content delivery server, just publish the "index rebuild flag" item from the master database to the content delivery database (web), and the index on your content delivery server should start rebuilding within 5 minutes.
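A stripped-down sketch of what the IndexRebuilder from step 2 could look like (the item path, field name and index name come from the examples above; error handling and logging are omitted):

using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Search;
using Sitecore.SecurityModel;

namespace MyAssembly
{
    public class IndexRebuilder
    {
        public void Run()
        {
            // On a CD server the context database may not be set for an agent,
            // so fall back to the web database.
            Database database = Sitecore.Context.Database ?? Database.GetDatabase("web");
            Item flag = database.GetItem("/sitecore/content/mysite/config/index rebuild flag");

            if (flag == null || flag["Rebuild Index"] != "1")
            {
                return;
            }

            // Rebuild the custom index (same API as in the answer above).
            SearchManager.GetIndex("your_index_name").Rebuild();

            // Reset the flag so the rebuild only runs once per publish.
            using (new SecurityDisabler())
            {
                flag.Editing.BeginEdit();
                flag["Rebuild Index"] = "0";
                flag.Editing.EndEdit();
            }
        }
    }
}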
Clicking Index Viewer with nothing happening is usually an indication that certain files of the Index Viewer package have not been deployed to your CD server. The easiest fix for this - if you do have /sitecore running on the CD server - is to just re-install the package directly on the CD server. After this, IndexViewer will work.
If you don't have /sitecore on your CD server (Sitecore recommends removing this, or at least blocking access to it), it becomes more problematic. I would recommend setting up a page/web service or similar that executes the code Maras suggests above - that way you can always trigger an index rebuild when you need it.