AWS Greengrass v2 - Lambda function access to local resources - amazon-web-services

Overview
I have a Lambda function that runs on a Raspberry Pi device with Greengrass version 1. This Lambda accesses my USB port, which has an XBee on it (/dev/ttyUSB0), and sends the data over MQTT to IoT Core; it has been working for some months. It works this way: my GGC receives 5 packets every 5 minutes from a remote station that has some sensors, and after unpacking this data it sends it as JSON through MQTT.
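For context, here is a minimal sketch of that kind of long-lived handler (assuming pyserial and the Greengrass Core SDK; the baud rate, frame parsing, and topic are placeholders, not the actual code):
# main.py - hedged sketch of a pinned (long-lived) Greengrass Lambda
# that forwards XBee frames from /dev/ttyUSB0 to IoT Core as JSON.
# Assumptions: pyserial is packaged with the function, the Greengrass
# Core SDK is available, and the topic and baud rate are illustrative.
import json
import threading

import greengrasssdk
import serial  # pyserial

iot_client = greengrasssdk.client('iot-data')
xbee = serial.Serial('/dev/ttyUSB0', 9600, timeout=5)

def poll_forever():
    # Pinned Lambdas do their work outside the handler; this loop keeps
    # reading frames from the remote station and publishing them.
    while True:
        raw = xbee.readline()
        if not raw:
            continue
        reading = {'raw': raw.hex()}  # real unpacking of the packets goes here
        iot_client.publish(topic='ggc/weather_station/data',
                           payload=json.dumps(reading))

threading.Thread(target=poll_forever, daemon=True).start()

def weather_handler(event, context):
    # Never invoked for a pinned function; present because the recipe
    # points the handler at main.weather_handler.
    return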
I'm currently trying to update my GGC v1 to GGC v2 and am facing a problem when deploying it. I'm not able to access the local resource on version 2 when running the same Lambda function, even though the recipe grants read and write access to the device.
On GGC v1 it uses the configuration below:
Make this function long-lived and keep it running indefinitely
Use group default (currently: Greengrass container)
Use group default (currently: ggc_user/ggc_group)
Also added access to resource /dev/ttyUSB0.
Problem Log:
2021-07-13T20:07:22.890Z [INFO] (pool-2-thread-58) com.weatherStation.XBee: Finding mounted cgroups.. {serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=STARTING}
2021-07-13T20:07:22.909Z [INFO] (Copier) com.weatherStation.XBee: Startup script exited. {exitCode=1, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=STARTING}
2021-07-13T20:07:22.915Z [INFO] (pool-2-thread-53) com.weatherStation.XBee: shell-runner-start. {scriptName=services.com.weatherStation.XBee.lifecycle.shutdown.script, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN, command=["/greengrass/v2/packages/artifacts/aws.greengrass.LambdaLauncher/2.0.7/lambda-l..."]}
2021-07-13T20:07:23.102Z [WARN] (Copier) com.weatherStation.XBee: stderr. 2021/07/13 17:07:23 could not read process state file /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/state.json: open /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/state.json: no such file or directory. {scriptName=services.com.weatherStation.XBee.lifecycle.shutdown.script, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN}
2021-07-13T20:07:23.220Z [ERROR] (pool-2-thread-60) com.weatherStation.XBee: error while removing dir {"path": "/greengrass/v2/work/com.weatherStation.XBee/work/worker/0", "errorString": "unlinkat /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/overlays: device or resource busy"}. {serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN}
Recipe:
"RecipeFormatVersion": "2020-01-25",
"ComponentName": "com.weatherStation.XBee",
"ComponentVersion": "5.0.2",
"ComponentType": "aws.greengrass.lambda",
"ComponentDescription": "",
"ComponentPublisher": "AWS Lambda",
"ComponentSource": "arn:aws:lambda:region:account_id:function:Greengrass_WeatherStation",
"ComponentConfiguration": {
"DefaultConfiguration": {
"lambdaExecutionParameters": {
"EnvironmentVariables": {}
},
"containerParams": {
"memorySize": 16000,
"mountROSysfs": false,
"volumes": {},
"devices": {
"0": {
"path": "/dev/ttyUSB0",
"permission": "rw",
"addGroupOwner": true
}
}
},
"containerMode": "GreengrassContainer",
"timeoutInSeconds": 15,
"maxInstancesCount": 100,
"inputPayloadEncodingType": "json",
"maxQueueSize": 1000,
"pinned": true,
"maxIdleTimeInSeconds": 60,
"statusTimeoutInSeconds": 60,
"pubsubTopics": {
"0": {
"topic": "ggc/weather_station/data",
"type": "IOT_CORE"
}
}
}
},
"ComponentDependencies": {
"aws.greengrass.LambdaLauncher": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "HARD"
},
"aws.greengrass.TokenExchangeService": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "HARD"
},
"aws.greengrass.LambdaRuntimes": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "SOFT"
}
},
"Manifests": [
{
"Platform": {
"os": "linux",
"architecture": "arm"
},
"Lifecycle": {},
"Artifacts": [
{
"Uri": "greengrass:lambda-artifact.zip",
"Digest": "GVgaQlVuSYmfgbwoStd5dfB9WamdQgrhbE72s2fF04ysno=",
"Algorithm": "SHA-256",
"Unarchive": "ZIP",
"Permission": {
"Read": "OWNER",
"Execute": "NONE"
}
}
]
}
],
"Lifecycle": {
"startup": {
"requiresPrivilege": true,
"script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher start"
},
"setenv": {
"AWS_GREENGRASS_LAMBDA_CONTAINER_MODE": "{configuration:/containerMode}",
"AWS_GREENGRASS_LAMBDA_ARN": "arn:aws:lambda:region:account_id:function:Greengrass_WeatherStation:5",
"AWS_GREENGRASS_LAMBDA_FUNCTION_HANDLER": "main.weather_handler",
"AWS_GREENGRASS_LAMBDA_ARTIFACT_PATH": "{artifacts:decompressedPath}/lambda-artifact",
"AWS_GREENGRASS_LAMBDA_CONTAINER_PARAMS": "{configuration:/containerParams}",
"AWS_GREENGRASS_LAMBDA_STATUS_TIMEOUT_SECONDS": "{configuration:/statusTimeoutInSeconds}",
"AWS_GREENGRASS_LAMBDA_ENCODING_TYPE": "{configuration:/inputPayloadEncodingType}",
"AWS_GREENGRASS_LAMBDA_PARAMS": "{configuration:/lambdaExecutionParameters}",
"AWS_GREENGRASS_LAMBDA_RUNTIME_PATH": "{aws.greengrass.LambdaRuntimes:artifacts:decompressedPath}/runtime/",
"AWS_GREENGRASS_LAMBDA_EXEC_ARGS": "[\"python3.7\",\"-u\",\"/runtime/python/lambda_runtime.py\",\"--handler=main.weather_handler\"]",
"AWS_GREENGRASS_LAMBDA_RUNTIME": "python3.7"
},
"shutdown": {
"requiresPrivilege": true,
"script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher stop; {aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher clean"
}
}
}

Related

AWS Step Function Error with Input to Map State

I have the following iteration state defined in a Map State:
"WriteRteToDB": {
"Comment": "Write Rte to DB. Also records the risk calculations in the same table.",
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"End": true,
"Parameters": {
"FunctionName": "logger-lambda",
"RtInfo.$": "States.Array($)",
"ExecutionId.$": "$$.Execution.Id",
"InitTime.$": "$$.Execution.StartTime"
}
The parameters defined produce the following input:
{
"FunctionName": "logger-lambda",
"RtInfo": {
"status": 200,
"rte": {
"date": "2022-06-05 00:00:00",
"rt_value": 778129128.6631782,
"lower_80": 0,
"upper_80": 0.5,
"location_id": "WeWork Office Space & Coworking, Town Square, Alpharetta, GA, USA",
"syndrome": "Gastrointestinal"
}
},
"InitTime": "2022-06-05T15:04:57.297Z",
"ExecutionId": "arn:aws:states:us-east-1:1xxxxxxxxxx1:execution:RadaRx-rteForecast:0dbf2743-abb5-e0b6-56d0-2cc82a24e3b4"
}
But the following error is produced:
{
"error": "States.Runtime",
"cause": "An error occurred while executing the state 'WriteRteToDB' (entered at the event id #28). The Parameters '{\"FunctionName\":\"logger-lambda\",\"RtInfo\":[{\"status\":200,\"rte\":{\"date\":\"2022-12-10 00:00:00\",\"rt_value\":1.3579795204795204,\"lower_80\":0,\"upper_80\":0.5,\"location_id\":\"Atlanta Tech Park, Technology Parkway, Peachtree Corners, GA, USA\",\"syndrome\":\"Influenza Like Illnesses\"}}],\"InitTime\":\"2022-06-05T16:06:10.132Z\",\"ExecutionId\":\"arn:aws:states:us-east-1:1xxxxxxxxxx1:execution:RadaRx-rteForecast:016a37f2-d01c-9bfd-dc3f-1288fb7c1af6\"}' could not be used to start the Task: [The field \"RtInfo\" is not supported by Step Functions]"
}
I have already tried wrapping RtInfo inside an array of length 1, as you can observe above, considering that it is a state within the Map State. I have also checked the input size to make sure that it does not exceed the 256 KB maximum input/output quota.
Your Task state's Parameters block has incorrect syntax. With the arn:aws:states:::lambda:invoke resource, Step Functions only accepts its own request fields (such as FunctionName and Payload) at the top level of Parameters, so pass RtInfo and the other user-defined inputs under the Payload key:
"Parameters": {
"FunctionName": "logger-lambda",
"Payload": {
"RtInfo.$": "States.Array($)",
"ExecutionId.$": "$$.Execution.Id",
"InitTime.$": "$$.Execution.StartTime"
}
}
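With the inputs nested under Payload, Step Functions passes that object as the Lambda's event, so the function can read the fields directly. Here is a hedged sketch of what the logger-lambda side might look like (handler name, return value, and the DB write are assumptions):
# Hypothetical logger-lambda handler. Step Functions delivers the
# Parameters.Payload object as the event, so the custom fields appear
# at the top level of `event`.
def lambda_handler(event, context):
    rt_info = event['RtInfo'][0]        # States.Array($) wraps the item in a list
    execution_id = event['ExecutionId']
    init_time = event['InitTime']
    # ... write rt_info and the risk calculations to the DB here ...
    return {'status': 200}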

Appflow upsert error : ID does not exist in the destination connector

Creating an AppFlow flow from an S3 bucket to Salesforce through CDK with the upsert option.
Using an existing connection from S3 to Salesforce -
new appflow.CfnConnectorProfile(this, 'Connector',{
"connectionMode": "Public",
"connectorProfileName":"connection_name",
"connectorType":"Salesforce"
})
Destination flow Code -
new appflow.CfnFlow(this, 'Flow', {
destinationFlowConfigList: [
{
"connectorProfileName": "connection_name",
"connectorType": "Salesforce",
"destinationConnectorProperties": {
"salesforce": {
"errorHandlingConfig": {
"bucketName": "bucket-name",
"bucketPrefix": "subfolder",
},
"idFieldNames": [
"ID"
],
"object": "object_name",
"writeOperationType": "UPSERT"
}
}
}
],
..... other props ....
}
tasks: [
{
"taskType":"Filter",
"sourceFields": [
"ID",
"Some other fields",
...
],
"connectorOperator": {
"salesforce": "PROJECTION"
}
},
{
"taskType":"Map",
"sourceFields": [
"ID"
],
"taskProperties": [
{
"key":"SOURCE_DATA_TYPE",
"value":"Text"
},
{
"key":"DESTINATION_DATA_TYPE",
"value":"Text"
}
],
"destinationField": "ID",
"connectorOperator": {
"salesforce":"PROJECTION"
}
},
{
.... some other mapping fields.....
}
But the problem is: "Invalid request provided: AWS::AppFlow::FlowCreate Flow request failed: [ID does not exist in the destination connector]"
According to the error, how do I fix the problem with the existing connector so it stops reporting that ID does not exist in the destination connector?
PS: ID is defined in the flow code, but it still says ID is not found.
I think your last connector operator should be:
"connectorOperator": {
"salesforce":"NO_OP"
}
instead of:
"connectorOperator": {
"salesforce":"PROJECTION"
}
since you are mapping the field ID into itself without any transformations whatsoever.

etcdctl throws Error: context deadline exceeded error

I'm trying to create a one-node etcd cluster on AWS using CoreOS cloud-config. I have created a Route53 record set with the value etcd.uday.com, which has an alias to the ELB that points to the EC2 instance. etcd is running successfully, but when I run the etcd member list command I get the below error:
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://etcd.udayvishwakarma.com:2379 \
--cacert=./ca.pem \
--cert=etcd-client.pem \
--key=etcd-client-key.pem
Error: context deadline exceeded
However, it lists the members when the --insecure-skip-tls-verify flag is added to the etcdctl member list command. I have generated certificates using cfssl with the below configs.
ca.json
{
"CN": "Root CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "UK",
"L": "London",
"O": "Kubernetes",
"OU": "CA"
}
],
"ca": {
"expiry": "87658h"
}
}
ca.config
{
"signing": {
"default": {
"expiry": "2190h"
},
"profiles": {
"client": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"server": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"peer": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
},
"ca": {
"usages": [
"signing",
"digital signature",
"cert sign",
"crl sign"
],
"expiry": "26280h",
"is_ca": true
}
}
}
}
etcd-member.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts":[
"etcd.uday.com"
],
"names": [
{
"O": "Kubernetes"
}
]
}
etcd-client.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts":[
"etcd.uday.com"
],
"names": [
{
"O": "Kubernetes"
}
]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
-config=ca-config.json -profile=peer \
etcd-member.json | cfssljson -bare etcd-member
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
-config=ca-config.json -profile=client \
etcd-client.json | cfssljson -bare etcd-client
My etcd-member.service systemd unit cloud-config is as below:
units:
- name: etcd-member.service
drop-ins:
- name: aws-etcd-cluster.conf
content: |
[Service]
Environment=ETCD_USER=etcd
Environment=ETCD_NAME=%H
Environment=ETCD_IMAGE_TAG=v3.1.12
Environment=ETCD_SSL_DIR=/etc/etcd/ssl
Environment=ETCD_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/certs/etcd-client.pem
Environment=ETCD_KEY_FILE=/etc/ssl/certs/etcd-client-key.pem
Environment=ETCD_CLIENT_CERT_AUTH=true
Environment=ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_PEER_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd-member.pem
Environment=ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd-member-key.pem
Environment=ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_INITIAL_CLUSTER_STATE=new
Environment=ETCD_INITIAL_CLUSTER=%H=https://%H:2380
Environment=ETCD_DATA_DIR=/var/lib/etcd3
Environment=ETCD_LISTEN_CLIENT_URLS=https://%H:2379,https://127.0.0.1:2379
Environment=ETCD_ADVERTISE_CLIENT_URLS=https://%H:2379
Environment=ETCD_LISTEN_PEER_URLS=https://%H:2380
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=https://%H:2380
PermissionsStartOnly=true
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/etcd-member-wrapper.uuid"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
ExecStartPre=/usr/bin/sed -i 's/^ETCDCTL_ENDPOINT.*$/ETCDCTL_ENDPOINT=https:\/\/%H:2379/' /etc/environment
ExecStartPre=/usr/bin/mkdir -p /var/lib/etcd3
ExecStartPre=/usr/bin/chown -R etcd:etcd /var/lib/etcd3
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
enable: true
command: start
Is the cert generation wrong, or is there something I have missed?
The certificates are generated for etcd.uday.com.
You are trying to connect using etcd.udayvishwakarma.com while certificate is valid for etcd.uday.com.
Change endpoint on etcdctl from etcd.udayvishwakarma.com to etcd.uday.com.
I ran into the same problem today. This is probably not going to be useful for you now, but it will be for anybody who runs into the same problem in the future.
I think you might be missing etcd.udayvishwakarma.com from your cert in --cert=etcd-client.pem.
To verify that etcd.udayvishwakarma.com exists in your cert, you can run:
openssl x509 -in etcd-client.pem -text
and you should be able to see it under X509v3 Subject Alternative Name. If you don't, you will probably need to recreate the certificate adding that DNS name.
While I am running .\etcdctl.exe put key value, I get this error:
Error: context deadline exceeded
Before running etcdctl.exe you should run etcd.exe first. In my case that works.

How to decrypt an encrypted payload in hyperledger?

I am currently using Hyperledger Fabric. I am using the REST API to make a GET request like so:
curl 172.18.0.3:7050/chain/blocks/31
And the output I am getting back is:
{
"transactions": [
{
"type": 2,
"chaincodeID": "BMBQHHg2y0RnadYEaZZT8icjMvZbDPjkn5mFb+clFORxJqz8qsMs/QlalCT+A3msuc59KYM5sbZyhM3OeSplTWo91WAHTUgqIKVrm1gUzsouBIqLNvpqgimN36+s0ywF0Rx4gn27RmQYBbB+877Nh+w7A8Ezz92T1MgHcmzfRgVaDmiN0ga+jAfufNYglmeM4ZSysmSsz6xJtrcD5mTmHXZtvtw6uGCI1TCOMBaWTpLhNHfM2/5EB5jatdMjDi1GAlaXkDWcLgGjScL1yZpWcntz/N0cT90r6i9ycXZ0kk9wodBq2cFutDTdkl8S90kzd0gXig==",
"payload":"BBYZD6S/hRILcf21zVbhMAhA+qLQvAq+KvOBuXOknPCAMjas2LI9f42AKG6r+uWP71LYEkbo1XXANuDmukZjDsFGltzoIfq+Mry5n/CNXzXgiVLX0J7z08kGfEfw2vnywgmVFX4UtKPpl8pMTmRxJWn5Q0HY1pFnA6ZaXluoLRf7f17Ko4SPahi19k2NszcJ0SHE7xRllfLXZJxaOlT2J56nqjTBKTJ86bdqn6AdQXHA6Px7yz5XpgJhccyecaLS4sYcsrqHoOlO+kk+bw5Q6qnkHfIIhLXCEgHxKoT00L8I8B2luO1RlmQd4mNfXb7GrLOJXvCNPrcpSEmQDByEGwn1j3Zy0lilwKVaNYTPNThMwQ==",
"txid": "72bd2ab7-f769-49c9-a754-c7be0c481cf0",
"timestamp": {
"seconds": 1496062124,
"nanos": 474977395
},
"confidentialityLevel": 1,
"confidentialityProtocolVersion": "1.2",
"nonce": "2YgU+0WYPuTKGsKkT1hx7McOURPTIRgG",
"toValidators": "BJWJi5aSycSaJBaLIciUxlhZNyRsW6es2pO7ljUmqxP2SLzgUJtDtAeG8S5SMq+RQ9iX9m8+HIUocrD2J1MBTJaxPWcs/dYFNp1zi8k1ogbEuIQJDe/Gb0mbYVoBqGgFjofiE2lrZTO+RBVmUBQkAoybloOMUSfMawpOPTt/cIeNBq3M+t6gbTSl0ZVs5ofITWtonwhG8PNnlZwEmTLkC7evX1ImivMqo47ONxHXJlbbtjf+pL5kaqU5DrXWiv2L6Wt0xc11od4rbotnAQP2w2dqKTy2fj4ON6qCBp8i+t2FRi/iO0INJpI0aDjdkVCR",
"cert": "MIICUjCCAfigAwIBAgIRAJOBK8HG3E/Pmw8fZwL4iuswCgYIKoZIzj0EAwMwMTELMAkGA1UEBhMCVVMxFDASBgNVBAoTC0h5cGVybGVkZ2VyMQwwCgYDVQQDEwN0Y2EwHhcNMTcwNTI5MTEyNzQwWhcNMTcwODI3MTEyNzQwWjBFMQswCQYDVQQGEwJVUzEUMBIGA1UEChMLSHlwZXJsZWRnZXIxIDAeBgNVBAMTF1RyYW5zYWN0aW9uIENlcnRpZmljYXRlMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6QVLJ48eCVlS1S8/BiSTU1XiWR0tZ6NGF3OZr306sTcgG/nYtcjx6/yJNwDgdYz5Boi7sA2QWUcqUkWfIPNWPKOB3DCB2TAOBgNVHQ8BAf8EBAMCB4AwDAYDVR0TAQH/BAIwADANBgNVHQ4EBgQEAQIDBDAPBgNVHSMECDAGgAQBAgMEME0GBioDBAUGBwEB/wRArYyx9l4zJL4TbxDHuGZBsJ545Jsph/D/Q/FgMTTtxPh93B+LV6AI1tyFVHWiKNS4GgvDVlmgfwFuMAca+/PaujBKBgYqAwQFBggEQPEdAS1h/9LJJmqriV+42k0bL+ghGFbHa5GiEAitiMjlduiwgfelPK/rbAq0a6NrnPXCEYe1aWCSqyqsEfHGBoIwCgYIKoZIzj0EAwMDSAAwRQIhAOidYaESZ3xyZBTgcBOm3zyXvGb4YCCt7I7+M0gZF4xzAiAgYuCf7FPGx3fnJdABlZjszA1pR6jaPtIOQN2ndfAFZA==",
"signature": "MEUCIQCVBtfjk3yzwfOFyOojH5tynq3HrG7dFN9URXB5C6kYDAIgLPcwJBAIVlD1I4dxzczfxmywlZn1ZMSvL2djioWgqFQ="
}
],
"stateHash": "9KEsiBp4t/VZyETXMASSYtuPuf8JowktCSbX7daPt69uqDzrJvifrPIXpI5N1kOayoq6H0afM8zN/WZpWsesHQ==",
"previousBlockHash": "v6Fo6SARD0xdE0B/jvIq22kgV5uLAKhTwLjrA4YRBskWcZOjECFbNgzlwFQhEmbar1zcAbcZVo9eo/3tx2y68g==",
"consensusMetadata": "CCA=",
"nonHashData": {
"localLedgerCommitTimestamp": {
"seconds": 1496062125,
"nanos": 496018341
},
"chaincodeEvents": [
{
},
{
}
]
}
}
So I had performed an invoke to transfer 10 from a to b, and I got this payload.
The payload is encrypted because CORE_SECURITY_ENABLED=true and CORE_SECURITY_PRIVACY=true.
I know we have to use the certificate to decrypt the payload, and then maybe use base64 decoding to get the exact payload back.
But my question is: what are the exact function calls or exact steps involved in doing so?

Elasticsearch _reindex fails

I am working on AWS Elasticsearch. It doesn't allow opening/closing an index, so setting changes cannot be applied to an existing index.
In order to change the settings of an index, I have to create a new index with the new settings and then move the data from the old index into the new one.
So first I created a new index with
PUT new_index
{
"settings": {
"max_result_window":3000000,
"analysis": {
"filter": {
"german_stop": {
"type": "stop",
"stopwords": "_german_"
},
"german_keywords": {
"type": "keyword_marker",
"keywords": ["whatever"]
},
"german_stemmer": {
"type": "stemmer",
"language": "light_german"
}
},
"analyzer": {
"my_german_analyzer": {
"tokenizer": "standard",
"filter": [
"lowercase",
"german_stop",
"german_keywords",
"german_normalization",
"german_stemmer"
]
}
}
}
}
}
It succeeded. Then I tried to move the data from the old index into the new one with this query:
POST _reindex
{
"source": {
"index": "old_index"
},
"dest": {
"index": "new_index"
}
}
It failed with
Request failed to get to the server (status code: 504)
I checked the indices with the _cat API; it gives:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open old_index AGj8WN_RRvOwrajKhDrbPw 5 1 2256482 767034 7.8gb 7.8gb
yellow open new_index WnGZ3GsUSR-WLKggp7Brjg 5 1 52000 0 110.2mb 110.2mb
Seemingly some data has been loaded in there; I'm just wondering why the _reindex doesn't work.
You can check the status of the reindex with the tasks API:
GET _tasks?detailed=true&actions=*reindex
There is a "status" object in response which has field "total":
total is the total number of operations that the reindex expects to perform. You can estimate the progress by adding the updated, created, and deleted fields. The request will finish when their sum is equal to the total field.
Link to ES Documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
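If you want to script that progress check, here is a hedged sketch that polls the tasks API and sums updated, created, and deleted as described above (the endpoint URL and the requests dependency are assumptions):
# Hedged sketch: estimate reindex progress from the _tasks API.
# Assumes the cluster is reachable at http://localhost:9200 and that
# the `requests` package is installed.
import requests

resp = requests.get('http://localhost:9200/_tasks',
                    params={'detailed': 'true', 'actions': '*reindex'})
for node in resp.json().get('nodes', {}).values():
    for task_id, task in node['tasks'].items():
        status = task['status']
        done = status['updated'] + status['created'] + status['deleted']
        print('{}: {}/{} operations completed'.format(task_id, done, status['total']))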