Apollo Sandbox Explorer not reachable when throwing an error inside "context"

Good day!
Why does throwing an error inside "context" destroy my Apollo Sandbox?
const server = new ApolloServer({
  ...
  context: ({ req }) => {
    throw new AuthenticationError('you must be logged in!');
  },
  ...
});
As a result, my queries and mutations are not shown in the Explorer.
Are there other configurations I need to set up so that I can throw an error in "context"?
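One workaround would be to defer the throw into the resolvers, so the Sandbox's introspection requests still get a context. A minimal sketch (getUser is a hypothetical helper that returns null when the request is unauthenticated):

const resolvers = {
  Query: {
    me: (parent, args, context) => {
      if (!context.user) {
        throw new AuthenticationError('you must be logged in!');
      }
      return context.user;
    },
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => ({
    // Never throw here; just attach what we know about the caller.
    user: getUser(req),
  }),
});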
Thanks in advance.

Related

Errors connecting to AWS Keyspaces using a lambda layer

I am intermittently getting the following error when connecting to an Amazon Keyspaces keyspace from a Lambda layer:
All host(s) tried for query failed. First host tried, 3.248.244.53:9142: Host considered as DOWN. See innerErrors.
I am trying to query a table in a keyspace from a Node.js Lambda function as follows:
import cassandra from 'cassandra-driver';
import fs from 'fs';

export default class AmazonKeyspace {
  tpmsClient = null;

  constructor () {
    let auth = new cassandra.auth.PlainTextAuthProvider('cass-user-at-xxxxxxxxxx', 'zzzzzzzzz');
    let sslOptions1 = {
      ca: [fs.readFileSync('/opt/utils/AmazonRootCA1.pem', 'utf-8')],
      host: 'cassandra.eu-west-1.amazonaws.com',
      rejectUnauthorized: true
    };
    this.tpmsClient = new cassandra.Client({
      contactPoints: ['cassandra.eu-west-1.amazonaws.com'],
      localDataCenter: 'eu-west-1',
      authProvider: auth,
      sslOptions: sslOptions1,
      keyspace: 'tpms',
      protocolOptions: { port: 9142 }
    });
  }

  getOrganisation = async (orgKey) => {
    const SQL = 'SELECT * FROM organisation WHERE organisation_id = ?;';
    return new Promise((resolve, reject) => {
      this.tpmsClient.execute(SQL, [orgKey], { prepare: true }, (err, result) => {
        if (!err?.message) resolve(result.rows);
        else reject(err.message);
      });
    });
  };
}
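For completeness, a minimal usage sketch (the handler name and event shape are assumptions, not from the original post); note the client is created once outside the handler so warm invocations reuse the connection:

import AmazonKeyspace from './AmazonKeyspace.js';

const keyspace = new AmazonKeyspace();

export const handler = async (event) => {
  // Reuses the long-lived client created above on warm starts.
  const rows = await keyspace.getOrganisation(event.orgKey);
  return { statusCode: 200, body: JSON.stringify(rows) };
};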
I am basically following the recommended AWS documentation:
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_nodejs_driver.html
It seems that around 10-20% of the time the Lambda function (Cassandra driver) cannot connect to the endpoint.
I am pretty familiar with Cassandra (I already manage a six-node cluster) and don't have any issues there.
Could this be a timeout, or do I need more contact points?
I followed the recommended guides and checked the AWS console for errors, but none are shown.
UPDATE:
I am occasionally (about 1 in 50 calls, when I invoke the function with 5 concurrent calls) getting the error below:
"All host(s) tried for query failed. First host tried,
3.248.244.5:9142: DriverError: Socket was closed at Connection.clearAndInvokePending
(/opt/node_modules/cassandra-driver/lib/connection.js:265:15) at
Connection.close
(/opt/node_modules/cassandra-driver/lib/connection.js:618:8) at
TLSSocket.
(/opt/node_modules/cassandra-driver/lib/connection.js:93:10) at
TLSSocket.emit (node:events:525:35)\n at node:net:313:12\n at
TCP.done (node:_tls_wrap:587:7) { info: 'Cassandra Driver Error',
isSocketError: true, coordinator: '3.248.244.5:9142'}
This exception may be caused by throttling on the Amazon Keyspaces side, resulting in the DriverError you are seeing sporadically.
I would suggest taking a look at this repo, which should help you put measures in place to either prevent this issue or at least reveal its true cause.
For some of the errors you see in the logs, you will need to investigate Amazon CloudWatch metrics to see whether you have throttling or system errors. I've built this AWS CloudFormation template to deploy a CloudWatch dashboard with all the appropriate metrics. This will provide better observability for your application.
A system error indicates an event that must be resolved by AWS and is often part of normal operations; activities such as timeouts, server faults, or scaling can result in server errors. A user error indicates an event that can usually be resolved by the user, such as an invalid query or exceeding a capacity quota. Amazon Keyspaces passes a system error back as a Cassandra ServerError. In most cases this is a transient error, in which case you can retry your request until it succeeds. With the Cassandra driver's default retry policy, customers can also see NoHostAvailableException or AllNodesFailedException, or messages like yours: "All host(s) tried for query failed". This is a client-side exception thrown once every host in the load balancing policy's query plan has attempted the request.
Take a look at this retry policy for Node.js, which should help resolve your "All hosts failed" exception or pass back the original exception.
The retry policies in the Cassandra drivers are fairly crude and cannot implement more sophisticated patterns such as circuit breakers. You may eventually want to use a "fail fast" retry policy for the driver and handle the exceptions in your application code.
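For illustration, a minimal sketch of plugging a custom retry policy into the Node.js cassandra-driver (the retry cap of 3 and the policy name are my assumptions, not taken from the linked repo):

import cassandra from 'cassandra-driver';

// Retries transient request errors (such as closed sockets) on the next host
// in the query plan instead of failing the whole request immediately.
class TransientRetryPolicy extends cassandra.policies.retry.RetryPolicy {
  onRequestError(info, consistency, err) {
    if (info.nbRetry < 3) {
      // false = don't reuse the current host; move on to the next one
      return this.retryResult(consistency, false);
    }
    return this.rejectResult();
  }
}

// Wire it in when creating the client:
// new cassandra.Client({ ..., policies: { retry: new TransientRetryPolicy() } });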

How to pass an aws IAM role to a Java client (through a lambda function)

I'm aiming to create a Lambda function which will execute a Java client; the Java client is supposed to call an AWS service endpoint.
Since my Java client needs authentication (I am approaching this with the AWS4Signer library), I would like to authenticate my Java code with the credentials of my Lambda execution role, as I can't use IAM users due to security procedures.
I've been trying to use InstanceProfileCredentialsProvider as my credentials provider:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-roles.html
which in theory should take the credentials from the instance metadata service (IMDS). I'm not sure if that only works on an EC2 instance, or whether it also works with other AWS compute services such as Lambda.
With InstanceProfileCredentialsProvider I'm getting the following error:
com.amazonaws.internal.InstanceMetadataServiceResourceFetcher - Token is not supported. Ignoring
Failed to connect to service endpoint: com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
I came across the following posts where a similar issue was reported:
https://github.com/aws/aws-sdk-java/issues/2285
https://medium.com/expedia-group-tech/service-slow-to-retrieve-aws-credentials-ebc02a38e95b
and it seems this happens because the cached instance credentials are already outdated by the time the request is authenticated.
So I added logic to refresh the credentials provider (InstanceProfileCredentialsProvider):
import java.util.Optional;
import java.util.concurrent.TimeUnit;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;

public static Optional<AWSCredentials> retrieveCredentials(AWSCredentialsProvider provider) {
    System.out.println("Retrieving credentials...");
    // One initial attempt, then up to 15 retries with a 30-second pause in between.
    for (int attempt = 0; attempt <= 15; attempt++) {
        try {
            System.out.printf("Retrieving credentials at attempt: %s%n", attempt);
            if (attempt > 0) {
                provider.refresh(); // force a fresh fetch from the metadata endpoint
            }
            return Optional.of(provider.getCredentials());
        } catch (Exception e) {
            System.out.printf("Attempt %s failed due to: %s%n", attempt, e.getMessage());
            try {
                TimeUnit.SECONDS.sleep(30);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return Optional.empty();
            }
        }
    }
    return Optional.empty();
}
But I'm still getting the same error.
Any help would be very much appreciated.

GCP cloud build VIEW RAW logs link

I have written a small Cloud Function in GCP which is subscribed to a Pub/Sub event. Whenever a Cloud Build is triggered, the function posts a message into a Slack channel over a webhook.
In the event payload we get lots of details such as the trigger name, branch name, and variable details, but I am most interested in the build logs URL.
Currently the build logs URL in the payload looks like: logUrl: https://console.cloud.google.com/cloud-build/builds/899-08sdf-4412b-e3-bd52872?project=125205252525252
which requires GCP console access to check the logs.
In the console, however, there is a View Raw option. Is it possible to get that direct URL in the event payload, so that I can send it straight to Slack and anyone can access the logs without GCP console access?
In your Cloud Build event message, you need to extract two values from the JSON message:
logsBucket
id
The raw file is stored at:
<logsBucket>/log-<id>.txt
So you can get it easily in your function with the Cloud Storage client library (preferred solution) or with a simple HTTP GET call to the Storage API.
If you need more guidance, let me know your dev language, I will send you a piece of code.
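For instance, in Node.js, a minimal sketch (assuming the @google-cloud/storage client library and the Pub/Sub build payload; fetchRawLog is a hypothetical name):

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();

async function fetchRawLog(build) {
  // logsBucket looks like "gs://my-bucket"; the client wants the bare name.
  const bucketName = build.logsBucket.replace('gs://', '');
  const [contents] = await storage
    .bucket(bucketName)
    .file(`log-${build.id}.txt`)
    .download();
  return contents.toString('utf8');
}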
As @guillaume blaquiere suggested, just sharing the piece of code used in the Cloud Function to generate the signed URL of the Cloud Build logs:
const { Storage } = require('@google-cloud/storage');

const gcs = new Storage();

const filename = 'log-' + build.id + '.txt';
const file = gcs.bucket(BUCKET_NAME).file(filename);

const getURL = async () => {
  return new Promise((resolve, reject) => {
    file.getSignedUrl({
      action: 'read',
      expires: Date.now() + 76000000
    }, (err, url) => {
      if (err) {
        console.error(err);
        return reject(err); // don't fall through to resolve()
      }
      console.log('URL');
      resolve(url);
    });
  });
};

const signedUrl = await getURL();
If anyone is looking for the whole code, please follow this link: https://github.com/harsh4870/Cloud-build-slack-notification/blob/master/singedURL.js

Connection timeout sequelize on AWS Lambda on third party MYSQL

I have deployed my Lambda function via the Serverless framework. When I invoke the function locally it works fine, but in the AWS Lambda environment it is unable to connect to the MySQL database hosted on remotemysql.com. It gives a timeout error every time.
I tried to increase the timeout, but nothing works:
sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    host: process.env.DB_HOST,
    dialect: "mysql",
    logging: false,
    connectTimeout: 60000
  }
);

sequelize
  .authenticate()
  .then(() => {
    logger.info("Database connection established");
    // do my work
    // some api calls to xys hosts outside servers
  })
  .catch(error => {
    logger.error("Database connection failed", {
      code: error.original.code,
      errno: error.original.errno
    });
    process.exit(1);
  });
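(Aside: with the mysql dialect, a driver-level connectTimeout is normally passed through dialectOptions so it reaches the underlying mysql2 driver; a minimal sketch, in case the top-level option above is being ignored:)

sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    host: process.env.DB_HOST,
    dialect: "mysql",
    logging: false,
    dialectOptions: {
      connectTimeout: 60000 // forwarded to the underlying mysql2 connection
    }
  }
);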
My function is not inside any VPC and it has internet access; I verified this because it returns responses from API calls I make to other services outside AWS.
I am not sure if this is because of the TCP connection or something else.
Please advise.
OK, so I have figured out what was causing the issue. It was not connectivity, but something wrong with the async handler.
My handler was like:
exports.handler = async () => { ... }
I removed async and now it is working fine.
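For reference, a minimal sketch of the callback-style handler this describes (the callbackWaitsForEmptyEventLoop line is my own addition; it is often relevant when an open MySQL connection keeps the event loop alive):

exports.handler = (event, context, callback) => {
  // Don't wait for open MySQL sockets to drain before returning the response.
  context.callbackWaitsForEmptyEventLoop = false;

  sequelize
    .authenticate()
    .then(() => callback(null, { statusCode: 200, body: 'ok' }))
    .catch(error => callback(error));
};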

Refresh of AWS.config.credentials

I'm trying to find a proper way of refreshing the AWS.config.credentials I get from Cognito. I'm using developer-authorized identities and it works fine: I get the credentials, and if I perform a refresh() the AWS.config.credentials.expireTime is also updated, as expected.
The credentials expire after one hour, so I thought I could use setTimeout to refresh them, with the delay computed from credentials.expireTime (I calculate the number of milliseconds).
However, it seems I have to refresh much more often than that. The credentials keep timing out before their expire time. The setTimeout approach works just fine if I reduce the delay to a much smaller amount, but I would prefer not to overdo the refresh.
Is this true, and if so, how often do I need to do this? Having it refresh every 5 minutes or so seems excessive :/
Recurring refresh:
function refreshAwsCredentials() {
  clearTimeout(awsRenewalTimeout);
  // perform credentials renewal
  AwsService.refreshAwsCredentials()
    .then(function () {
      // schedule the next refresh for 5 minutes before the credentials expire
      awsRenewalTimeout = setTimeout(
        refreshAwsCredentials,
        AWS.config.credentials.expireTime.getTime() - new Date().getTime() - 300000
      );
    })
    .catch(function (error) {
      // checks error; normally it basically logs in, then refreshes
    });
}
AwsService.refreshAwsCredentials() (the body, wrapped in a Promise):
return new Promise(function (resolve, reject) {
  if (AWS.config.credentials.needsRefresh()) {
    AWS.config.credentials.refresh(function (error) {
      if (error) {
        reject(error); // rejects promise with error message
      } else {
        resolve(); // resolves promise
      }
    });
  } else {
    resolve(); // credentials still valid; nothing to refresh
  }
});
I finally found out that I had a check for an auth error that was too specific: I checked for being logged in, but not for the credentials having timed out. Now I attempt a login whenever things fail, and the credentials timeout is resolved when needed.
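A minimal sketch of that catch branch (AwsService.login() is a hypothetical helper that re-establishes the Cognito developer identity):

function handleRefreshFailure(error) {
  // Treat any failure as potentially expired credentials: log in again,
  // then retry the refresh, instead of only checking "am I logged in?".
  return AwsService.login()
    .then(function () {
      return AwsService.refreshAwsCredentials();
    })
    .catch(function (loginError) {
      console.error('Re-login failed:', loginError);
      throw loginError;
    });
}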