When using Redis Caching (which implements HTTPCache and KeyValueCache), failed gets shouldn't block request - apollo

I find this hard to believe, but it seems that when implementing a Redis backend cache in Apollo Server, if Apollo Server / the Redis client is unable to connect to the Redis server, the Apollo request is blocked until it can connect.
I've been digging through Apollo Server's source code, and it seems the behaviour probably comes down to the Redis client itself.
This may be a matter of ioredis configuration, but I'm not able to find the right combination. Hoping someone can help.
When Apollo does its fetch through its HTTPCache class, this is the problematic code:
async fetch(
  request: Request,
  options: {
    cacheKey?: string;
    cacheOptions?:
      | CacheOptions
      | ((response: Response, request: Request) => CacheOptions | undefined);
  } = {},
): Promise<Response> {
  const cacheKey = options.cacheKey ? options.cacheKey : request.url;
  const entry = await this.keyValueCache.get(cacheKey);
The very first thing it does is a cache get, and if that promise never resolves, the request never continues.
And here is my semi-random attempt at finding the right configuration for ioredis:
const cluster = new Redis.Cluster(
  [
    {
      host: config.redis.endpoint,
      port: config.redis.port,
    },
  ],
  {
    retryDelayOnTryAgain: 0,
    retryDelayOnClusterDown: 0,
    retryDelayOnFailover: 0,
    slotsRefreshTimeout: 0,
    clusterRetryStrategy: (times, reason) => {
      // Function should return how long to wait before retrying to connect to Redis.
      const maxRetryDelay = 30000;
      const delay = Math.min(times * 1000, maxRetryDelay);
      logger.info(`Redis`, `Connection retry. Try number ${times}. Delay: ${delay}`);
      if (reason) logger.error(reason.message);
      return delay; // Steadily increase the retry delay up to the max defined above.
    },
    redisOptions: {
      tls: {
        rejectUnauthorized: false,
      },
      autoResendUnfulfilledCommands: false,
      retryStrategy: () => {
        return;
      },
      disconnectTimeout: 0,
      reconnectOnError: () => false,
      connectTimeout: 0,
      commandTimeout: 0,
      maxRetriesPerRequest: 0,
      connectionName: 'Tank Dev',
      username: config.redis.auth.username,
      password: config.redis.auth.password,
    },
    slotsRefreshInterval: 60000,
  },
);
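One possible workaround (a sketch only, not an official Apollo API: it assumes the KeyValueCache interface from apollo-server-caching, and TIMEOUT_MS is an arbitrary number) is to wrap the cache so each get races a short timeout, letting an unreachable Redis degrade to a cache miss instead of blocking the request:

import type { KeyValueCache } from 'apollo-server-caching';

const TIMEOUT_MS = 50; // arbitrary budget for a cache round trip

function withTimeout<T>(promise: Promise<T>, fallback: T): Promise<T> {
  const timeout = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), TIMEOUT_MS),
  );
  return Promise.race([promise, timeout]);
}

class TimeoutKeyValueCache implements KeyValueCache<string> {
  constructor(private inner: KeyValueCache<string>) {}

  get(key: string): Promise<string | undefined> {
    // Resolve with undefined (a cache miss) if Redis doesn't answer in time.
    return withTimeout<string | undefined>(
      this.inner.get(key).catch(() => undefined),
      undefined,
    );
  }

  async set(key: string, value: string, options?: { ttl?: number | null }): Promise<void> {
    // Fire and forget; a failed set shouldn't block the request either.
    this.inner.set(key, value, options).catch(() => {});
  }

  delete(key: string): Promise<boolean | void> {
    return withTimeout<boolean | void>(
      this.inner.delete(key).catch(() => undefined),
      undefined,
    );
  }
}

Apollo Server would then be handed new TimeoutKeyValueCache(existingRedisCache) instead of the Redis-backed cache directly, so a down Redis costs at most TIMEOUT_MS per lookup.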

Related

NEXTJS Amplify slow server response

I have an SSR Next.js 12 app deployed on AWS Amplify that's too slow.
Logging from getServerSideProps(), this is the result:
It takes 9 seconds to load the page, but the code inside getServerSideProps takes less than 0.5 seconds.
This is the server log:
START RequestId: 94ced4e1-ec32-4409-8039-fdcd9b5f5894 Version: 300
2022-09-13T09:25:32.236Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 1 [09:25:32.236] -
2022-09-13T09:25:32.253Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 2 [09:25:32.253] -
2022-09-13T09:25:32.255Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 3 [09:25:32.255] -
2022-09-13T09:25:32.255Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 4 [09:25:32.255] -
2022-09-13T09:25:32.431Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 5 [09:25:32.431] -
2022-09-13T09:25:32.496Z 94ced4e1-ec32-4409-8039-fdcd9b5f5894 INFO 6 [09:25:32.496] -
END RequestId: 94ced4e1-ec32-4409-8039-fdcd9b5f5894
REPORT RequestId: 94ced4e1-ec32-4409-8039-fdcd9b5f5894 Duration: 9695.59 ms Billed Duration: 9696 ms Memory Size: 512 MB Max Memory Used: 206 MB
That's the code:
export async function getServerSideProps(context) {
  console.log("1 [" + new Date().toISOString().substring(11, 23) + "] -");
  let req = context.req;
  console.log("2 [" + new Date().toISOString().substring(11, 23) + "] -");
  const { Auth } = withSSRContext({ req });
  console.log("3 [" + new Date().toISOString().substring(11, 23) + "] -");
  try {
    console.log("4 [" + new Date().toISOString().substring(11, 23) + "] -");
    const user = await Auth.currentAuthenticatedUser();
    console.log("5 [" + new Date().toISOString().substring(11, 23) + "] -");
    const dict = await serverSideTranslations(context.locale, ["common", "dashboard", "footer", "hedgefund", "info", "etf", "fs"]);
    console.log("6 [" + new Date().toISOString().substring(11, 23) + "] -");
    return {
      props: {
        exchange: context.params.exchange,
        ticker: context.params.ticker,
        username: user.username,
        attributes: user.attributes,
        ...dict,
      },
    };
  } catch (err) {
    return {
      redirect: {
        permanent: false,
        destination: "/auth/signin",
      },
      props: {},
    };
  }
}
This is not the answer but rather an alternative.
I tried using Amplify for my implementation because getServerSideProps on the Vercel hobby account gives a function timeout error. However, I think the Next.js deployment to Amplify is not optimized yet.
Instead of using getServerSideProps, I used getStaticPaths and getStaticProps, and always limited the number of paths fetched from my API.
On the client side:
export const getStaticPaths: GetStaticPaths = async () => {
  // This route to my API only gets paths (IDs)
  const res = await getFetcher("/sentences-paths");
  let paths = [];
  if (res.success && res.resource) {
    paths = res.resource.map((sentence: any) => ({
      params: { sentenceSlug: sentence._id },
    }));
  }
  return { paths, fallback: "blocking" };
};
On the API side:
const getSentencePaths = async (req, res) => {
  const limit = 50;
  Sentence.find(query)
    .select("_id")
    .limit(limit)
    .exec()
    .then((resource) => res.json({ success: true, resource }))
    .catch((error) => res.json({ success: false, error }));
};
This means that even if I have 100,000 sentences, only 50 are rendered at build time. The rest of the sentences are generated on demand because we have fallback: "blocking". See the docs.
Here is what my getStaticProps looks like:
export const getStaticProps: GetStaticProps = async ({ params }) => {
  const sentenceSlug = params?.sentenceSlug;
  const response = await getFetcher(`/sentences/${sentenceSlug}`);
  let sentence = null;
  if (response.success && response.resource) sentence = response.resource;
  if (!sentence) {
    return {
      notFound: true,
    };
  }
  return {
    props: { sentence },
    revalidate: 60,
  };
};
As you can see above, I used revalidate: 60 seconds (see the docs), but since you wanted to use getServerSideProps, that's not the perfect solution.
The perfect solution is On-Demand Revalidation. With this, whenever you change data that's used in a page, for example the sentence content, you can trigger a webhook to regenerate the page created by getStaticProps, so your page is always up to date.
This YouTube tutorial on implementing on-demand revalidation is really comprehensive: https://www.youtube.com/watch?v=Wh3P-sS1w0I&t=8s&ab_channel=TuomoKankaanp%C3%A4%C3%A4
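For context, a minimal sketch of what such a revalidation endpoint can look like (assuming Next.js 12.2+, where res.revalidate is stable; the secret, query parameter and page path are hypothetical placeholders):

// pages/api/revalidate.ts
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Hypothetical shared secret so only your webhook can trigger a rebuild.
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: "Invalid token" });
  }
  try {
    // Regenerate the page built by getStaticProps for this sentence.
    await res.revalidate(`/sentences/${req.query.sentenceSlug}`);
    return res.json({ revalidated: true });
  } catch (err) {
    // If revalidation fails, the last successfully generated page keeps being served.
    return res.status(500).send("Error revalidating");
  }
}

The webhook that fires when a sentence changes would then call /api/revalidate?secret=...&sentenceSlug=..., and the page is rebuilt without waiting for the 60-second window.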
Next.js on Vercel works way faster and more efficiently. Hope I helped.

NOT_FOUND(5): Instance Unavailable. HTTP status code 404

I get this error when the task is being processed.
This is my Node.js code:
// Imports the Cloud Tasks library and creates a client.
import { CloudTasksClient, protos } from "@google-cloud/tasks";
const client = new CloudTasksClient();

async function quickstart(message: any) {
  // TODO(developer): Uncomment these lines and replace with your values.
  const project = "";  // project id
  const queue = "";    // queue name
  const location = ""; // region
  const payload = JSON.stringify({
    id: message.id,
    data: message.data,
    attributes: message.attributes,
  });
  const inSeconds = 180;
  // Construct the fully qualified queue name.
  const parent = client.queuePath(project, location, queue);
  const task = {
    appEngineHttpRequest: {
      headers: {"Content-type": "application/json"},
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      relativeUri: "/api/download",
      body: "",
    },
    scheduleTime: {},
  };
  if (payload) {
    task.appEngineHttpRequest.body = Buffer.from(payload).toString("base64");
  }
  if (inSeconds) {
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }
  const request = {
    parent: parent,
    task: task,
  };
  console.log("Sending task:");
  console.log(task);
  // Send create task request.
  const [response] = await client.createTask(request);
  console.log(`Created task ${response.name}`);
  console.log("Created task");
  return true;
}
The task is created without issue. However, it didn't trigger my Cloud Function, and I got a 404 or an unhandled exception in my Cloud logs. I have no idea what's going wrong.
I also tested with the gcloud CLI without the issue; the gcloud CLI was able to trigger my Cloud Function based on the provided URL.
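For comparison, since the task above is an App Engine task (appEngineHttpRequest with a relativeUri) while the intended target is a Cloud Function URL, here is a rough sketch of the HTTP-target variant. The project, region, queue, function URL and service account below are placeholders; this is offered as an illustration of the API rather than a confirmed fix:

import { CloudTasksClient, protos } from "@google-cloud/tasks";

const client = new CloudTasksClient();

// All values below are placeholders.
async function createHttpTask(payload: object) {
  const parent = client.queuePath("my-project", "asia-south1", "my-queue");
  const task: protos.google.cloud.tasks.v2.ITask = {
    httpRequest: {
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      // Hypothetical Cloud Function URL; an HTTP task targets an absolute URL,
      // not a relativeUri on the App Engine service.
      url: "https://asia-south1-my-project.cloudfunctions.net/api-download",
      headers: { "Content-Type": "application/json" },
      body: Buffer.from(JSON.stringify(payload)).toString("base64"),
      // Needed only if the function does not allow unauthenticated calls.
      oidcToken: { serviceAccountEmail: "tasks-invoker@my-project.iam.gserviceaccount.com" },
    },
  };
  const [response] = await client.createTask({ parent, task });
  console.log(`Created HTTP task ${response.name}`);
}

The appEngineHttpRequest form, by contrast, is dispatched to the project's App Engine service, so its relativeUri has to exist there; whether that difference explains the 404 here is an open question.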

Why is transaction marked as invalid?

I am trying to create a transaction by sending 1 ether from one account to another. Currently, I'm running a local, fully synced Parity node on the Volta test network of EWF (Energy Web Foundation). It's possible to connect to that node with MetaMask and send some ether, but whenever I try the same with web3.js running in a Node.js app, the Parity node gives the following warning/output:
2019-10-25 00:56:50 jsonrpc-eventloop-0 TRACE own_tx Importing transaction: PendingTransaction { transaction: SignedTransaction { transaction: UnverifiedTransaction { unsigned: Transaction { nonce: 1, gas_price: 60000000000, gas: 21000, action: Call(0x2fa24fee2643d577d2859e90bc6d9df0e952034c), value: 1000000000000000000, data: [] }, v: 37, r: 44380982720866416681786190614497685349697533052419104293476368619223783280954, s: 3058706309566473993642661190954381582008760336148306221085742371338506386965, hash: 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702 }, sender: 0x0b9b5323069e9f9fb438e89196b121f3d40fd56e, public: Some(0xa3fc6a484716b844f18cef0039444a3188a295811f133324288cb963f3e5a21dd6ee19c91e42fa745b45a3cf876ff04e0fd1d060ccfe1dab9b1d608dda4c3733) }, condition: None }
2019-10-25 00:56:50 jsonrpc-eventloop-0 DEBUG own_tx Imported to the pool (hash 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702)
2019-10-25 00:56:50 jsonrpc-eventloop-0 WARN own_tx Transaction marked invalid (hash 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702)
When I check the balance of the accounts, nothing has happened. I've tried increasing gasPrice, adding a 'from' key/value pair to txObject, etc. I have also started the Parity node with --no-persistent-txqueue so that it doesn't cache too many transactions, as suggested here. But that didn't change anything either: I still get the same warning and the transaction doesn't go through. What is causing this problem and how can I solve it?
web3.eth.getTransactionCount(from, (err, txCount) => {
  const txObject = {
    nonce: web3.utils.toHex(txCount),
    from: from,
    to: to,
    value: web3.utils.toHex(web3.utils.toWei(val, 'ether')),
    gas: web3.utils.toHex(21000),
    gasPrice: web3.utils.toHex(web3.utils.toWei('60', 'gwei'))
  }
  // Sign the transaction
  const tx = new Tx(txObject);
  tx.sign(pk);
  const serializedTx = tx.serialize();
  const raw = '0x' + serializedTx.toString('hex');
  // Broadcast the transaction to the network
  web3.eth.sendSignedTransaction(raw, (err, txHash) => {
    if (txHash === undefined) {
      res.render('sendTransaction', {
        txSuccess: 0,
        blockHash: 'Hash Undefined'
      });
      res.end();
    } else {
      res.render('sendTransaction', {
        txSuccess: 1,
        blockHash: txHash,
      });
      res.end();
    }
  });
});
Any suggestion is welcome,
Thanks!
The code looks fine, so it must be something small; you're very close!
What does the error object contain in the web3.eth.sendSignedTransaction callback?
Check your nonce to make sure it has not been used yet. https://volta-explorer.energyweb.org/address/0x0b9b5323069e9f9fb438e89196b121f3d40fd56e/transactions shows a few transactions from the sender address, with "4" being the next value for the nonce.
Try generating the raw transaction via JavaScript and broadcasting it manually via the command line.
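Following the nonce suggestion, here is a small sketch that compares the confirmed and pending transaction counts before signing (assuming web3 1.x; the RPC endpoint is a placeholder, and the sender address is the one from the Parity log above):

import Web3 from "web3";

const web3 = new Web3("http://localhost:8545"); // placeholder: local Parity JSON-RPC endpoint
const from = "0x0b9b5323069e9f9fb438e89196b121f3d40fd56e"; // sender address from the log above

async function checkNonce() {
  const confirmed = await web3.eth.getTransactionCount(from, "latest");
  const pending = await web3.eth.getTransactionCount(from, "pending");
  console.log({ confirmed, pending });
  // If pending > confirmed, a transaction with the "latest" nonce is already
  // sitting in the queue, and signing another one with that same nonce is one
  // common reason a pool marks the new transaction as invalid.
}

checkNonce();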

How to fix aws-iot-device-sdk disconnect behaviour after switching internet connection?

I am trying to set up aws-iot-device-sdk-js with reconnect behaviour after the WiFi connection is switched, and it is taking around 20 minutes to do so. I am not sure where I am going wrong, and the docs don't cover the issue I am having either.
I have tried going through the package docs and changing the keepalive time, but I still see the same output: the offline event is only emitted after about 20 minutes, and then the device reconnects.
const awsIot = require("aws-iot-device-sdk").device;
const certs = require("./certs_config");

const device = awsIot({
  keyPath: certs.KEYPATH,
  certPath: certs.CERTPATH,
  caPath: certs.CAPATH,
  deviceId: "rt.bottle.com.np",
  host: "aot2wgmcbqwsa-ats.iot.ap-south-1.amazonaws.com",
  region: "ap-south-1",
  keepalive: 60
});

let delay = 4000;
let count = 0;
const minimumDelay = 250;
if ((Math.max(delay, minimumDelay)) !== delay) {
  console.log('substituting ' + minimumDelay + 'ms delay for ' + delay + 'ms...');
}
setInterval(function () {
  count++;
  device.publish('topic', JSON.stringify({
    count
  }));
}, Math.max(delay, minimumDelay)); // clip to minimum

device
  .on('connect', function () {
    console.log('connect');
  });
device
  .on('close', function () {
    console.log('close');
  });
device
  .on('reconnect', function () {
    console.log('reconnect');
  });
device
  .on('offline', function () {
    console.log('offline');
  });
device
  .on('error', function (error) {
    console.log('error', error);
  });
device
  .on('message', function (topic, payload) {
    console.log('message', topic, payload.toString());
  });
In the AWS console I am getting this message:
Mqtt connection lost. Reconnect. Error code: 4. AMQJS0004E Ping timed out.
after around 1.5 minutes of the network switch, but in the Node server setup, as you can see in the code above, it only receives the offline event after around 20 minutes. I want to get the error/offline/disconnect event as soon as the device disconnects or goes offline (i.e. when the error appears in the AWS console), as expected.
I am currently using the simulateNetworkFailure function to handle the network switch issue I was having. I hope it helps others having the same issue.
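A rough sketch of that idea: a watchdog that publishes a small QoS 1 ping and, if nothing is acknowledged within an arbitrary 30-second window, calls simulateNetworkFailure() to force the offline/reconnect cycle immediately instead of waiting for the long TCP timeout. The certificate paths, clientId, host and the 30-second window below are placeholders, and the sketch assumes the SDK exposes simulateNetworkFailure() on the device object as mentioned above.

import { device as IotDevice } from "aws-iot-device-sdk";

// Placeholder connection options; reuse the same values as in the question.
const device = new IotDevice({
  keyPath: "certs/private.pem.key",
  certPath: "certs/certificate.pem.crt",
  caPath: "certs/AmazonRootCA1.pem",
  clientId: "reconnect-watchdog-example",
  host: "xxxxxxxxxxxxxx-ats.iot.ap-south-1.amazonaws.com",
  keepalive: 30,
});

const WATCHDOG_MS = 30 * 1000; // arbitrary window
let lastAck = Date.now();

setInterval(() => {
  // QoS 1 so the callback only fires once the broker acknowledges the publish.
  device.publish("watchdog/ping", JSON.stringify({ t: Date.now() }), { qos: 1 }, (err) => {
    if (!err) lastAck = Date.now();
  });

  if (Date.now() - lastAck > WATCHDOG_MS) {
    // Force the connection down so 'offline'/'reconnect' fire right away.
    // Cast to any in case the typings don't declare simulateNetworkFailure().
    (device as any).simulateNetworkFailure();
    lastAck = Date.now();
  }
}, 5000);

device.on("offline", () => console.log("offline"));
device.on("reconnect", () => console.log("reconnect"));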

AWS javascript SDK request.js send request function execution time gradually increases

I am using aws-sdk to push data to a Kinesis stream.
I am using PutRecord to achieve a real-time data push.
I am observing the same delay with putRecords as well, in the case of a batch write.
I have tried this out with 4 records, where I am not crossing any shard limit.
Below is my Node.js HTTP agent configuration. The default maxSockets value is set to Infinity.
Agent {
  domain: null,
  _events: { free: [Function] },
  _eventsCount: 1,
  _maxListeners: undefined,
  defaultPort: 80,
  protocol: 'http:',
  options: { path: null },
  requests: {},
  sockets: {},
  freeSockets: {},
  keepAliveMsecs: 1000,
  keepAlive: false,
  maxSockets: Infinity,
  maxFreeSockets: 256 }
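For reference, since the dump above shows keepAlive: false, this is a minimal sketch of pointing the SDK at a keep-alive HTTPS agent so connections are reused between calls (assuming aws-sdk v2; the region value is a placeholder), in case connection reuse is relevant here:

import * as AWS from "aws-sdk";
import * as https from "https";

// Reuse TCP/TLS connections between putRecord calls instead of opening a new
// one per request; the default agent shown above has keepAlive: false.
const kinesis = new AWS.Kinesis({
  region: "us-east-1", // placeholder region
  httpOptions: {
    agent: new https.Agent({ keepAlive: true, maxSockets: 50 }),
  },
});

aws-sdk v2 can also enable connection reuse via the AWS_NODEJS_CONNECTION_REUSE_ENABLED=1 environment variable; I have not measured how much of the delay that would recover here.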
Below is my code.
I am using the following code to trigger the putRecord call:
event.Records.forEach(function(record) {
  var payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
  // put record request
  evt = transformEvent(payload);
  promises.push(writeRecordToKinesis(kinesis, streamName, evt));
});
Event structure is
evt = {
  Data: new Buffer(JSON.stringify(payload)),
  PartitionKey: payload.PartitionKey,
  StreamName: streamName,
  SequenceNumberForOrdering: dateInMillis.toString()
};
This event is used in the put request.
function writeRecordToKinesis(kinesis, streamName, evt) {
  console.time('WRITE_TO_KINESIS_EXECUTION_TIME');
  var deferred = Q.defer();
  try {
    kinesis.putRecord(evt, function(err, data) {
      if (err) {
        console.warn('Kinesis putRecord %j', err);
        deferred.reject(err);
      } else {
        console.log(data);
        deferred.resolve(data);
      }
      console.timeEnd('WRITE_TO_KINESIS_EXECUTION_TIME');
    });
  } catch (e) {
    console.error('Error occurred while writing data to Kinesis: ' + e);
    deferred.reject(e);
  }
  return deferred.promise;
}
Below is output for 3 messages.
WRITE_TO_KINESIS_EXECUTION_TIME: 2026ms
WRITE_TO_KINESIS_EXECUTION_TIME: 2971ms
WRITE_TO_KINESIS_EXECUTION_TIME: 3458ms
Here we can see a gradual increase in response time and function execution time.
I have added counters in the aws-sdk request.js class, and I can see the same pattern there as well.
Below is the code snippet from the aws-sdk request.js class which executes the put request.
send: function send(callback) {
  console.time('SEND_REQUEST_TO_KINESIS_EXECUTION_TIME');
  if (callback) {
    this.on('complete', function (resp) {
      console.timeEnd('SEND_REQUEST_TO_KINESIS_EXECUTION_TIME');
      callback.call(resp, resp.error, resp.data);
    });
  }
  this.runTo();
  return this.response;
},
Output for send request:
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 1751ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 1816ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 2761ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 3248ms
Here you can see it increasing gradually.
Can anyone please suggest how I can reduce this delay?
3 seconds to push a single record to Kinesis is not at all acceptable.