Zend InputFilter: validating an array of objects in a web service

I have two objects, $object1 and $object2. Each object has a firstname and a lastname, like this:
{
"firstname": "string",
"lastname": "string"
}
The input of my web service (WS) is an array of these two objects:
[$object1, $object2]
OR
[
{
"firstname": "string",
"lastname": "string"
},
{
"firstname": "string",
"lastname": "string"
}
]
How can I implement this with Zend InputFilter?
Thank you

I use \Zend\InputFilter\CollectionInputFilter. The collection wraps the input filter that validates a single object and applies it to every element of the array:
$collection = new \Zend\InputFilter\CollectionInputFilter();
$collection->setInputFilter($itemInputFilter); // the filter for a single object
$this->add($collection);
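A minimal end-to-end sketch (assuming the payload has already been decoded to a PHP array, e.g. from the request body; the validator options are illustrative):
use Zend\InputFilter\CollectionInputFilter;
use Zend\InputFilter\InputFilter;

// Input filter describing ONE object of the array
$itemFilter = new InputFilter();
$itemFilter->add([
    'name'     => 'firstname',
    'required' => true,
    'filters'  => [['name' => 'StringTrim']],
]);
$itemFilter->add([
    'name'     => 'lastname',
    'required' => true,
    'filters'  => [['name' => 'StringTrim']],
]);

// The collection applies $itemFilter to every element of the input array
$collection = new CollectionInputFilter();
$collection->setInputFilter($itemFilter);

$collection->setData($payload); // e.g. json_decode($body, true)
if ($collection->isValid()) {
    $clean = $collection->getValues();    // array of filtered objects
} else {
    $errors = $collection->getMessages(); // validation messages per element
}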

How to read a Solidity function with tuple data in ethers.js

When I call the getLandById function in Remix, it gives the desired result, as you can see in the screenshot.
[Screenshot of the Remix IDE calling getLandById]
When I call the same function using ethers.js, it gives output like this:
[ '0.007062190', '-0.01878356', '\x00\x00', [Getter] ]
Instead of
[ '0.007062190', '-0.01878356', '-0.019048060716011,0.007015231577652,-0.018794627684582,0.007386060761845,-0.018423798497481,0.007132627732498,-0.018677231528875,0.006761798548127,-0.019048060716011,0.007015231577652']
I'm struggling to understand how to get the data. Any idea what I should do to get the desired result? An example would be great.
ethers.js code:
let customHttpProvider = new ethers.providers.JsonRpcProvider(API_URL);
const contract = new ethers.Contract(
  Contract_Address,
  contractAbi,
  customHttpProvider
);
// Calling the read-only method
async function getLand() {
  const getLandById = await contract.getLandById("502");
  console.log("Land-Info", getLandById);
}
getLand();
Contract ABI (the relevant entry):
{
"inputs": [
{
"internalType": "uint256",
"name": "landId",
"type": "uint256"
}
],
"name": "getLandById",
"outputs": [
{
"internalType": "string",
"name": "",
"type": "string"
},
{
"internalType": "string",
"name": "",
"type": "string"
},
{
"internalType": "string",
"name": "",
"type": "string"
},
{
"components": [
{
"internalType": "string",
"name": "longitude",
"type": "string"
},
{
"internalType": "string",
"name": "latitude",
"type": "string"
}
],
"internalType": "struct LandContract.PolygonCoordinates[]",
"name": "",
"type": "tuple[]"
}
],
"stateMutability": "view",
"type": "function"
},
Solidity code:
function getLandById(uint landId)
  public
  view
  returns (
    string memory,
    string memory,
    PolygonCoordinates[] memory
  )
{
  if (!_exists(landId)) {
    revert IdNotExist();
  }
  PolygonCoordinates[] memory coordinates = new PolygonCoordinates[](
    land[landId].polygonCoordinates.length
  );
  for (uint i = 0; i < land[landId].polygonCoordinates.length; ) {
    coordinates[i].longitude = land[landId].polygonCoordinates[i].longitude;
    coordinates[i].latitude = land[landId].polygonCoordinates[i].latitude;
    unchecked {
      i++;
    }
  }
  return (land[landId].longitude, land[landId].latitude, coordinates);
}
The contract ABI defines 4 output values (three strings plus the tuple[]), but your Solidity getLandById() returns only 3 (two strings plus the array). ethers.js decodes the return data strictly according to the ABI, so the extra string entry shifts every subsequent value, which is why you see garbage. Regenerate the ABI from the current contract source (or remove the stale third string output) and the call will decode correctly.
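For reference, the regenerated outputs section should match the Solidity signature, roughly like this (a sketch derived from the code above, not from your actual build artifacts):
"outputs": [
{ "internalType": "string", "name": "", "type": "string" },
{ "internalType": "string", "name": "", "type": "string" },
{
"components": [
{ "internalType": "string", "name": "longitude", "type": "string" },
{ "internalType": "string", "name": "latitude", "type": "string" }
],
"internalType": "struct LandContract.PolygonCoordinates[]",
"name": "",
"type": "tuple[]"
}
]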

How to delete a column in BigQuery that is part of a nested column

I want to delete a column in a BigQuery table that is part of a record or nested column. I've found a command for dropping columns in their documentation; unfortunately, it is not available for nested columns inside existing RECORD fields.
Is there any workaround for this?
For example, given the schema below, I want to remove the address2 field inside the addresses record. So from this:
[
{
"name": "name",
"type": "STRING",
"mode": "NULLABLE"
},
{
"name": "addresses",
"type": "RECORD",
"mode": "REPEATED",
"fields": [
{
"name": "address1",
"type": "STRING",
"mode": "NULLABLE"
},
{
"name": "address2",
"type": "STRING",
"mode": "NULLABLE"
},
{
"name": "country",
"type": "STRING",
"mode": "NULLABLE"
}
]
}
]
to this:
[
{
"name": "name",
"type": "STRING",
"mode": "NULLABLE"
},
{
"name": "addresses",
"type": "RECORD",
"mode": "REPEATED",
"fields": [
{
"name": "address1",
"type": "STRING",
"mode": "NULLABLE"
},
{
"name": "country",
"type": "STRING",
"mode": "NULLABLE"
}
]
}
]
Use the query below:
select * replace(
  array(select as struct * except(address2) from t.addresses) as addresses
)
from `project.dataset.table` t
If you want to permanently remove that field, use CREATE OR REPLACE TABLE, as in the example below:
create or replace table `project.dataset.new_table` as
select * replace(
  array(select as struct * except(address2) from t.addresses) as addresses
)
from `project.dataset.table` t
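To sanity-check the rewritten schema before replacing the original table, you can inspect the new table with the bq CLI (the new_table name is just the placeholder used above):
bq show --schema --format=prettyjson project:dataset.new_table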

LoopBackJs REST API Create response not returning full model, only form data

When I POST to api/testmodel using an object with only the required fields, the object is created correctly in the DB. However, the response only contains the object I sent in the request body. I'm trying to get the full object, with its null fields, in the response.
Thanks for the help!
{
"name": "test",
"plural": "test",
"base": "PersistedModel",
"idInjection": true,
"replaceOnPUT": false,
"properties": {
"city": {
"type": "string",
"length": 100
},
"name": {
"type": "string",
"required": true,
"length": 100
},
"id": {
"type": "string",
"id": true,
"required": true,
},
"officePhone": {
"type": "string",
"length": 100
},
"status": {
"type": "string",
"required": false,
"length": 200
},
"street": {
"type": "string",
"length": 100
}
},
"methods": {}`
Then you need to define default values for your model's properties, for example for city:
"properties": {
"city": {
"type": "string",
"length": 100,
"default": ""
},
...
In your controller, after you have created your new record and have the record ID, perform a findById query and return that object instead of the object returned from create. This should give you a response similar to a GET route.
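A minimal sketch of that approach, assuming LoopBack 3 (the createAndFetch remote method name and file path are illustrative, matching the model from the question):
// common/models/test.js
module.exports = function (Testmodel) {
  // Custom create endpoint that re-reads the record before responding
  Testmodel.createAndFetch = async function (data) {
    const created = await Testmodel.create(data);
    // findById returns every property, including the ones that were
    // not sent in the request body (null or their default value)
    return Testmodel.findById(created.id);
  };

  Testmodel.remoteMethod('createAndFetch', {
    http: { path: '/', verb: 'post' },
    accepts: { arg: 'data', type: 'object', http: { source: 'body' } },
    returns: { arg: 'data', type: 'object', root: true },
  });
};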

Aggregation by a compound field (copy_to) not working on Elasticsearch

I have an index in Elasticsearch (v 1.5.0) that has a mapping that looks like this:
{
"storedash": {
"mappings": {
"outofstock": {
"_ttl": {
"enabled": true,
"default": 1296000000
},
"properties": {
"CompositeSKUProductId": {
"type": "string"
},
"Hosts": {
"type": "nested",
"properties": {
"HostName": {
"type": "string"
},
"SKUs": {
"type": "nested",
"properties": {
"CompositeSKUProductId": {
"type": "string",
"index": "not_analyzed"
},
"Count": {
"type": "long"
},
"ProductId": {
"type": "string",
"index": "not_analyzed",
"copy_to": [
"CompositeSKUProductId"
]
},
"SKU": {
"type": "string",
"index": "not_analyzed",
"copy_to": [
"CompositeSKUProductId"
]
}
}
}
}
},
"Timestamp": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
}
}
}
Note how the CompositeSKUProductId field is built from both the SKU and ProductId fields via copy_to.
I now want to perform an aggregation on that composite field, but it doesn't seem to work; the relevant part of my query looks like this:
"aggs": {
"hostEspecifico": {
"filter": {
"term": { "Hosts.HostName": "www.example.com"}
},
"aggs": {
"skus": {
"nested": {
"path": "Hosts.SKUs"
},
"aggs": {
"valores": {
"terms": {
"field": "Hosts.SKUs.CompositeSKUProductId", "order": { "media": "desc" }, "size": 100 },
"aggs": {
"media": {
"avg": {
"field": "Hosts.SKUs.Count"
}
}
}
}
}
}
}
}
}
Thing is, this aggregation returned zero buckets, as though the field weren't even there.
I checked that the very same query works if I replace CompositeSKUProductId with another field like ProductId.
Any ideas as to what I can do to solve my problem?
N.B.: I'm using the AWS Elasticsearch Service, which does not allow scripting.
The problem here is that you have misunderstood the copy_to functionality. It simply copies the values of the source fields into the target field; it does not combine them the way you would expect.
If SKU is 123 and ProductId is 456, the composite field will contain them as two separate values, not as "123 456". You can verify this by querying your field.
You would have to do this on the server side, ideally with a script, but that is not allowed here. We also used the AWS ES service and faced multiple problems, the major ones being that you cannot change elasticsearch.yml and cannot use scripts. You might want to look at Found.
Hope this helps!
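To see what was actually indexed, you can run a terms aggregation against the root-level field (a sketch using the index and type from the question, ES 1.x syntax; note the root CompositeSKUProductId is analyzed, so the terms come back tokenized):
curl -XGET 'localhost:9200/storedash/outofstock/_search?search_type=count' -d '{
"aggs": {
"copied_values": {
"terms": { "field": "CompositeSKUProductId", "size": 10 }
}
}
}'
Each SKU and ProductId value shows up as its own bucket, confirming the values are kept separate rather than combined.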
In order to copy_to a field in a nested document, you need to supply the full path of the target field in your mapping. You have only provided "CompositeSKUProductId", which causes the data to be copied to a field on the root document instead of the nested SKUs document.
Try updating the mapping of your "SKUs" type so that copy_to uses the fully qualified field "Hosts.SKUs.CompositeSKUProductId" instead. Note that mapping changes only affect newly indexed documents, so you will need to reindex existing data before the aggregation returns buckets.
Like this:
{
"storedash": {
"mappings": {
"outofstock": {
"_ttl": {
"enabled": true,
"default": 1296000000
},
"properties": {
"CompositeSKUProductId": {
"type": "string"
},
"Hosts": {
"type": "nested",
"properties": {
"HostName": {
"type": "string"
},
"SKUs": {
"type": "nested",
"properties": {
"CompositeSKUProductId": {
"type": "string",
"index": "not_analyzed"
},
"Count": {
"type": "long"
},
"ProductId": {
"type": "string",
"index": "not_analyzed",
"copy_to": [
"Hosts.SKUs.CompositeSKUProductId"
]
},
"SKU": {
"type": "string",
"index": "not_analyzed",
"copy_to": [
"Hosts.SKUs.CompositeSKUProductId"
]
}
}
}
}
},
"Timestamp": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
}
}
}
You may find this GitHub discussion helpful, where a similar issue was opened.

Elasticsearch - add a constant mapping definition to a specific type in the mapping

I have a static mapping JSON that contains many entities, for instance:
{
"settings": {},
"mappings": {
"MyEntity": {
"properties": {
"date": {
"type": "date",
"format": "dateOptionalTime"
},
"name": {
"type": "string",
},
"tweet": {
"type": "string"
},
"user_id": {
"type": "long"
}
}
}
}
}
Where "MyEntity" is an example of one of many entities.
What I want is that every time an entity has the value:
"name": {
"type": "string",
},
this will be added:
"name": {
"type": "string",
"analyzer": "mm_name_analyzer",
"fields": {
"lc": {
"type": "string",
"analyzer": "case_insensitive_sort"
},
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
I don't want this applied to every field that is defined as a string, only to the name field.
Is there a way to do it?
You can apply the mapping to _default_, which makes sure it is applied to all types in the indices the template matches.
In the template below, replace indexName with the index you need, or give an index-name pattern.
curl -XPUT localhost:9200/_template/nameTemplate -d '{
"template": "indexName",
"mappings": {
"_default_": {
"dynamic_templates": [
{
"name_field": {
"match": "name",
"match_mapping_type": "string",
"mapping": {
"type": "string",
"analyzer": "mm_name_analyzer",
"fields": {
"lc": {
"type": "string",
"analyzer": "case_insensitive_sort"
},
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
]
}
}
}'
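Once the template is stored, you can confirm it takes effect by indexing a document into a matching index and checking the generated mapping (this assumes the mm_name_analyzer and case_insensitive_sort analyzers are defined in the template's settings block; otherwise index creation will fail):
curl -XPOST 'localhost:9200/indexName/MyEntity' -d '{ "name": "John Doe" }'
curl -XGET 'localhost:9200/indexName/_mapping?pretty'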