I am using the AWS Elasticsearch Service and have indexed 650,000 documents.
I need to add two new fields to the already indexed documents.
When I tried to call the updateByQuery function I got the error 'scripts of type [inline], operation [update] and lang [groovy] are disabled'.
I fixed it locally by adding
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
to elasticsearch.yml, and it works perfectly there.
How can I add this configuration on AWS ES?
I am getting the same error when I update documents in the AWS Elasticsearch Service.
Here is my code. I want to update all records (where "device" = deviceVal) by adding the new fields Site and Time.
var site = 'some value';
var deviceVal = '123';

// Groovy inline script that adds the two new fields
var theScript = {
  "inline": "ctx._source.Site = '" + site + "';ctx._source.Time = '" + new Date().getTime() + "'"
};

// Match all documents for the given device
var match = {
  "match": { "device": deviceVal }
};

client.updateByQuery({
  index: 'my_index',
  type: 'txt',
  body: {
    query: match,
    script: theScript
  }
}, function (error, response) {
  console.log('error--', error);
  console.log('response--', response);
});
Building on the other answer, where we use Logstash to reindex into an AWS ES cluster, you simply need to add one more transformation where # add other transformations here is mentioned.
In your case the input part needs to contain a query for the device:
input {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    index => "my_index"
    query => '{"query": {"match":{"device": "123"}}}'
    docinfo => true
  }
}
And the filter part would boil down to this, i.e. we rename the @timestamp field and add the Site field:
filter {
  mutate {
    remove_field => [ "@version" ]
    rename => { "@timestamp" => "Time" }
    add_field => { "Site" => "some value" }
  }
}
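For completeness, the output part would point back at the same cluster; a minimal sketch, assuming the same domain and index as above and that the cluster accepts plain HTTP as in the input block (document_id reuses the original document IDs, which is what docinfo => true makes available):
output {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    index => "my_index"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}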
I am facing a '3 INVALID_ARGUMENT: Document bytes or path is required' error while using Google Cloud Document AI with a gs:// file URI.
Minimal implementation on Node.js 12:
const documentai = require('@google-cloud/documentai').v1;

function processingDocument(params) {
  return new Promise((resolve, reject) => {
    let options = {
      credentials: {
        client_email: client_email,
        private_key: private_key,
      },
      projectId: project_id
    };
    const client = new documentai.DocumentProcessorServiceClient(options);
    client.processDocument(params, function (error, data) {
      if (error) {
        return reject(error); // return so we don't also resolve on error
      }
      resolve(true); // Testing only
    });
  });
}
Params that work:
const params = {
  "name": "projects/project_name/locations/us/processors/xxxx",
  "rawDocument": {
    "mimeType": "image/png",
    "content": "iV....=" // b64 content
  }
}
Params that do not work:
const params = {
  "name": "projects/project_name/locations/us/processors/xxxx",
  "inlineDocument": {
    "mimeType": "image/png",
    "uri": "gs://bucket_name/demo-assets/file.png"
  }
}
I suspected a permissions error, so I checked whether Document AI requires explicit permission to access Google Cloud Storage; apparently it does not.
I also tried a more elaborate payload:
const params = {
  "name": "projects/project_name/locations/us/processors/xxxx",
  "inlineDocument": {
    "mimeType": "image/png",
    "textStyles": [],
    "pages": [],
    "entities": [],
    "entityRelations": [],
    "textChanges": [],
    "revisions": [],
    "uri": "gs://bucket_name/demo-assets/file.png"
  }
}
Unfortunately, I am stuck. Any idea what is happening?
The uri field cannot currently be used for processing a document.
You are currently using Online Processing, which only supports local files.
If you want to process documents stored in Google Cloud Storage, you will need to use Batch Processing following the examples provided on this page.
https://cloud.google.com/document-ai/docs/send-request#batch-process
The GCS Input URI must be provided in the BatchDocumentsInputConfig gcsPrefix or gcsDocuments field.
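A minimal sketch of that batch flow in Node.js, assuming the same processor and bucket names as above (the output prefix gs://bucket_name/output/ is a placeholder; batchProcessDocuments returns a long-running operation whose JSON results are written under that prefix):
const documentai = require('@google-cloud/documentai').v1;
const client = new documentai.DocumentProcessorServiceClient();

async function batchProcessDocument() {
  const request = {
    name: 'projects/project_name/locations/us/processors/xxxx',
    inputDocuments: {
      gcsDocuments: {
        documents: [
          { gcsUri: 'gs://bucket_name/demo-assets/file.png', mimeType: 'image/png' },
        ],
      },
    },
    documentOutputConfig: {
      gcsOutputConfig: {
        gcsUri: 'gs://bucket_name/output/', // results land here as JSON
      },
    },
  };
  // batchProcessDocuments starts a long-running operation
  const [operation] = await client.batchProcessDocuments(request);
  await operation.promise();
  console.log('Batch processing complete');
}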
I use React Native with GraphQL.
I want to be able to upload multiple photos in one post in my app.
What I made is this: as you can see, file (where the uploaded file is stored) holds one string, and one row means one post in my app.
My Prisma schema accordingly has the file column set as String:
model Photo {
  id        Int      @id @default(autoincrement())
  userId    Int      // scalar FK required by the relation below
  user      User     @relation(fields: [userId], references: [id])
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  file      String
}
But I want to upload multiple photos in one post, which means one row in the DB.
So my idea is to make my DB look like below: the file column becomes an array of strings in a single row.
To do this, I change the schema's file column to String[]:
model Photo {
  id        Int      @id @default(autoincrement())
  userId    Int      // scalar FK required by the relation below
  user      User     @relation(fields: [userId], references: [id])
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  file      String[]
}
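Presumably the GraphQL schema also needs the file argument to be a list for this to work end to end; a hypothetical sketch of the mutation definition (names assumed, not taken from my actual code):
type Mutation {
  uploadPhoto(file: [Upload!]!, caption: String): Photo
}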
And I send the upload data from the front end as below, using map:
const onValid = async ({ caption }) => {
  const file = await Promise.all(
    selectPhoto.map(
      (sp, index) =>
        new ReactNativeFile({
          uri: sp,
          name: `${index}.jpg`,
          type: "image/jpeg",
        })
    )
  );
  await uploadPhotoMutation({
    variables: {
      caption,
      file,
    },
  });
};
Then the array below goes to the backend:
Array [
  ReactNativeFile {
    "name": "0.jpg",
    "type": "image/jpeg",
    "uri": "file:///storage/emulated/0/DCIM/Screenshots/Screenshot_20220223-011625_KakaoTalk.jpg",
  },
  ReactNativeFile {
    "name": "1.jpg",
    "type": "image/jpeg",
    "uri": "file:///storage/emulated/0/DCIM/Camera/20220222_161411.jpg",
  },
]
Then I upload this data to AWS S3 in order to get the URLs:
import AWS from "aws-sdk";

export const uploadSingleFileToS3 = async (file, userId, folderName) => {
  const { filename, createReadStream } = await file;
  const readStream = createReadStream();
  const objectName = `${folderName}/${userId}-${Date.now()}-${filename}`;
  // upload().promise() resolves to the result object, so it must be awaited
  const { Location } = await new AWS.S3()
    .upload({
      Bucket: "chungchunonuploads",
      Key: objectName,
      ACL: "public-read",
      Body: readStream,
    })
    .promise();
  return Location;
};

export const uploadFileToS3 = async (filesToUpload, userId, folderName) => {
  // the callback must return the promise, otherwise map() yields undefined
  const uploadPromises = filesToUpload.map((file) =>
    uploadSingleFileToS3(file, userId, folderName)
  );
  return Promise.all(uploadPromises);
};
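For context, a minimal resolver sketch (all names here are hypothetical, assuming a graphql-upload style file array and a Prisma client in the context) showing how uploadFileToS3 could feed the String[] column:
const resolvers = {
  Mutation: {
    uploadPhoto: async (_, { file, caption }, { loggedInUser, client }) => {
      // upload every file and collect the resulting S3 URLs
      const fileUrls = await uploadFileToS3(file, loggedInUser.id, "uploads");
      // store the URLs in the String[] column; caption is omitted since the
      // Photo model shown above has no caption field
      return client.photo.create({
        data: {
          file: fileUrls,
          user: { connect: { id: loggedInUser.id } },
        },
      });
    },
  },
};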
However, I can't even check whether my code is right, because I keep facing this error.
I don't think it's just a network issue, because all the other functions of the app work well, so I guess some backend error is causing this.
I can't figure out even a little clue from this error message ^^;
I need opinions from many people.
If you have any advice, please tell me.
We are using apollo-client in a React project. We added a cursor layer on top of all list queries. For example:
query MediaList($mediaIds: [ID!], $type: [MediaType!], $userId: ID!) {
  user {
    id
    medias_cursor(all_medias: true, active: true, ids: $mediaIds) {
      medias {
        id
        type
        category
        name
      }
    }
  }
}
Now for a different MediaList query, the Media objects might already exist in the cache, but we cannot use them to skip the network query. For example:
After we query medias_cursor({"all_medias":true,"active":true,"ids":["361","362","363"]}),
we've already got the three Media objects here (Media:361, Media:362, Media:363).
So when we try to query medias_cursor({"all_medias":true,"active":true,"ids":["361","363"]}), we should have everything we need in the cache already. But right now, the Apollo default behavior just bypasses the cache and hits the network.
We tried to add a cacheRedirects config to solve this problem like this:
const cache = new InMemoryCache({
  cacheRedirects: {
    User: {
      medias_cursor: (_, { ids }, { getCacheKey }) => {
        if (!ids) return undefined;
        return {
          // map here is lodash's map
          medias: map(ids, id => getCacheKey({ __typename: 'Media', id: id }))
        };
      },
    },
  },
});
We expected cacheRedirects to let us use the cache when it's available, but it still skips the cache anyway.
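For reference, the documented cacheRedirects pattern returns cache keys directly (a single key, or an array of keys for a list field) rather than an object wrapping them; a sketch along the lines of the Apollo 2.x docs:
const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      book: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Book', id: args.id }),
    },
  },
});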
I have a problem with the following situation:
Model 1: Guest - props {"slug": "string"}
Model 2: Project - props {"prefix": "string"}
Relation: Project has many Guests
How do I write a remote method findGuestWithProject(prefix, slug) that returns the guest with the given slug (exact match, but case-insensitive) and the related project with the exact prefix?
Problems I encountered:
1. The initial filter returns guests with a similar but not exact slug, e.g. if I pass "anna", .find could return guests with slug "anna-maria", so later on I need to check whether the slug is exactly the same.
2. The initial filter returns guests with a different project.prefix, so I need an extra loop to find the exact match.
3. I need to count iterations to return the callback if no match is found.
Guest.getGuestProject = function(prefix, slug, cb) {
  if (!prefix) return;
  var pattern = new RegExp(slug, "i");
  app.models.Project.findOne({
    "where": { "prefix": prefix }
  }, (err, project) => {
    if (err) throw err;
    if (!project) return cb(null, null);
    return project.guests({
      "where": { "slug": pattern },
      "include": { "relation": "project", "scope": { "include": { "relation": "rsvps" } } }
    }, (err, guests) => {
      if (guests.length === 0) return cb(null, null);
      guests.forEach(guest => {
        if (guest.slug.toLowerCase() === slug.toLowerCase()) {
          cb(null, guest);
        }
      });
    });
  });
};
Regarding 1: your regexp checks for anything containing slug.
For 2 and 3 I've just rewritten it. You haven't specified which DB connector you are using (MongoDB, MySQL, PostgreSQL, etc.), so I've written this example for PostgreSQL, which is the one I usually use and one of the worst-case scenarios, given that relational databases don't support filtering by nested properties. If you are using either MongoDB or Cloudant, take a look at the example provided in https://loopback.io/doc/en/lb3/Querying-data.html#filtering-nested-properties, because this snippet could be simpler.
If this answer is not what you were looking for, I'll probably need more details. I'm also using promises instead of callbacks.
Guest.getGuestProject = function(prefix, slug) {
  const Project = Guest.app.models.Project;
  // First of all find projects with the given prefix
  return Project.find({
    where: {
      prefix: prefix
    },
    include: 'guests'
  }).then(projects => {
    // Look for a guest whose slug matches exactly (case-insensitive).
    // Returning from inside forEach would not return from the outer
    // function, so iterate explicitly instead.
    for (const project of projects) {
      const match = project.guests().find(guest =>
        guest.slug.toLowerCase() === slug.toLowerCase()
      );
      if (match) return match;
    }
    return null;
  });
};
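Usage is then promise-based rather than callback-based; a quick sketch:
Guest.getGuestProject('some-prefix', 'anna')
  .then(guest => {
    if (!guest) {
      console.log('No exact match found');
    } else {
      console.log('Found guest:', guest.slug);
    }
  })
  .catch(err => console.error(err));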
I'm using this Node API JSON, which returns Customers, their instances, and the instance versions.
Customers.find({
  "include": {
    "relation": "instances",
    "scope": {
      "include": {
        "relation": "versions"
      }
    }
  }
});
I would like to exclude all customers which do not have any related instances; in the result JSON there is an "instances" entry with an empty [ ]. However, when I try to use this in a "where" I get a server error. Any ideas, or am I going about this the wrong way?
If you're using MongoDB as your database, then you could add a where property to your filter at the same level as your first include, like:
var filter = {
  where: {
    relationId: {
      exists: false
    }
  },
  include: {...}
};

Customers.find(filter, function ( err, results ) {...});
See issue #1009 for details/updates re: database implementation.
Alternatively, you could just filter the results with Lodash:
var _ = require('lodash');

var customersWithInstances = _.filter(customers, function (customer) {
  // keep only customers whose related instances array is non-empty
  return customer.instances && customer.instances.length > 0;
});