This might sound a little odd, but I'm facing a situation where I have a micro-service that assembles some pricing logic, and to do that it needs a bunch of information that another micro-service provides.
I believe I have two options: (1) grab all the data I need from the database and ignore the GraphQL work that was done in this other micro-service or (2) somehow hit this other micro-service from within my current service and get the data I need.
How would someone accomplish (2)?
I see no clear path to getting that done without creating a mess.
I imagine that turning my pricing micro-service into a small client could work, but I'm just not sure if that's bad practice.
After much consideration and reading the answers I got here, I decided to turn my micro-service into a mini-client by using apollo-client.
In short, I have something like this:
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

// Instantiate required constructor fields
const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'http://localhost:3000/graphql',
});

const client = new ApolloClient({
  // Provide required constructor fields
  cache: cache,
  link: link,
});

export default client;
That HttpLink points at the federated schema, so I can query it from my resolvers or anywhere else like this:
const query = gql`
  query Something($uid: String!) {
    something(uid: $uid) {
      field1
      field2
      field3
      anotherThing {
        field1
        field2
      }
    }
  }
`;

// Passing uid as a GraphQL variable (rather than interpolating it into the
// query string) avoids escaping and injection issues
const response = await dataSources.client.query({ query, variables: { uid } });
Sorry, I am new to blockchain development, so pardon my silly basic question.
I have created a bunch of ERC-1155 tokens on the Polygon mainnet. I have all the addresses and IDs of the tokens. Now I want to transfer them to other users from my backend (Node.js) API.
What I have tried till now:
I used opensea-js with the following code, but I'm getting an Alchemy error.
import { OpenSeaPort, Network } from 'opensea-js'
import Web3 from "web3"

// This example provider won't let you make transactions, only read-only calls:
const provider = new Web3.providers.HttpProvider('https://polygon-mainnet.g.alchemy.com/v2/**************************')

const seaport = new OpenSeaPort(provider, {
  networkName: Network.Main,
})

const transactionHash = await seaport.transfer({
  asset: {
    tokenId: '**************************************',
    tokenAddress: '**********************************',
    schemaName: "ERC1155"
  },
  fromAddress: '***********************************', // Must own the asset
  toAddress: '*************************************',
  quantity: 1,
})

console.log(transactionHash);
It's giving the error "Unsupported method: eth_sendTransaction".
I then searched this error, and Alchemy has some complex solution for it. But I believe this is a very simple task, and there must be a simpler solution that I could not find.
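For context, the error appears because hosted RPC providers like Alchemy don't hold your private keys, so they can't serve eth_sendTransaction; the transaction has to be signed locally before it is broadcast. Here is a minimal sketch of that idea with plain web3.js; the RPC URL placeholder, the PRIVATE_KEY environment variable, the gas value, and all addresses are assumptions, not from the original post:

import Web3 from 'web3';

const web3 = new Web3('https://polygon-mainnet.g.alchemy.com/v2/<YOUR_KEY>');

// Sign locally: load your key and add it to web3's in-memory wallet
const account = web3.eth.accounts.privateKeyToAccount(process.env.PRIVATE_KEY);
web3.eth.accounts.wallet.add(account);

// Minimal ERC-1155 ABI fragment: only the method we call below
const erc1155Abi = [{
  name: 'safeTransferFrom',
  type: 'function',
  stateMutability: 'nonpayable',
  inputs: [
    { name: 'from', type: 'address' },
    { name: 'to', type: 'address' },
    { name: 'id', type: 'uint256' },
    { name: 'amount', type: 'uint256' },
    { name: 'data', type: 'bytes' },
  ],
  outputs: [],
}];

async function transferToken(tokenAddress, toAddress, tokenId) {
  const contract = new web3.eth.Contract(erc1155Abi, tokenAddress);
  // web3 signs with the wallet account added above, then broadcasts
  // the raw (already-signed) transaction through the Alchemy node
  return contract.methods
    .safeTransferFrom(account.address, toAddress, tokenId, 1, '0x')
    .send({ from: account.address, gas: 200000 });
}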
Let's say I have a "Banner" table.
There are 2 possible use cases for this table.
1. Get all banner data from the table.
My lambda function might look like the one below:
'use strict'

const AWS = require('aws-sdk');

exports.handler = async function (event, context, callback) {
  const documentClient = new AWS.DynamoDB.DocumentClient();

  let responseBody = "";
  let statusCode = 0;

  const params = {
    TableName: "Banner",
  };

  try {
    const data = await documentClient.scan(params).promise();
    responseBody = JSON.stringify(data.Items);
    statusCode = 200;
  } catch (err) {
    responseBody = `Unable to get banners: ${err}`;
    statusCode = 403;
  }

  const response = {
    statusCode: statusCode,
    headers: {
      "Content-Type": "application/json",
      'Access-Control-Allow-Origin': '*', // Required for CORS support to work
    },
    body: responseBody,
  };
  return response;
}
2. Query by partition key/GSI.
I may need to query by banner id or banner title to get the corresponding items, with something like the query sketch below.
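A hedged sketch of what that second access pattern could look like with the document client; the "TitleIndex" GSI and its key schema are assumptions, not something from the original post:

const params = {
  TableName: "Banner",
  IndexName: "TitleIndex", // assumed GSI on the banner title
  KeyConditionExpression: "title = :title",
  ExpressionAttributeValues: { ":title": event.pathParameters.title },
};
// query() hits only the matching partition, unlike the full-table scan() above
const data = await documentClient.query(params).promise();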
At first, I was thinking of combining these two use cases in one single lambda function,
until I opened the post below:
aws - how to set lambda function to make dynamic query to dynamodb
One of the comments provides a way to do the dynamic query for these 2 use cases, but they also mention that:
you are giving anyone who invokes the request the ability to put any query in the request, which might leave you vulnerable to some type of SQL-injection-style attack.
This makes me wonder whether I should separate these 2 use cases into two lambda functions.
What is the general practice for these kinds of things?
Generally speaking, even if the "SQL injection" risk can be blocked, it's good to separate this into 2 functions: a Lambda handler should have a single responsibility. If you want to reuse the code, you can pull the common code into a shared DAL, as sketched below.
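A minimal sketch of that common-DAL idea; the module name, function names, and the "TitleIndex" GSI are invented for illustration:

// banner-dal.js: shared data-access layer imported by both lambdas
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

// Use case 1: fetch every banner
exports.getAllBanners = () =>
  documentClient.scan({ TableName: "Banner" }).promise();

// Use case 2: fetch banners matching a title via an assumed GSI
exports.getBannersByTitle = (title) =>
  documentClient.query({
    TableName: "Banner",
    IndexName: "TitleIndex",
    KeyConditionExpression: "title = :title",
    ExpressionAttributeValues: { ":title": title },
  }).promise();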
I think this comes down to personal preference, but I'd recommend splitting the functionality into two lambdas.
It sounds like you have two access patterns:
Get all banners
Get banners by user
I would probably implement these with two separate lambdas. If I were exposing this functionality via an API, I'd probably create two endpoints:
GET /banners (fetches all banners)
GET /users/[user_id]/banners (fetches all banners for a given user)
Each of these endpoints would route to their own lambda that services the specific request. If you serve the request with a single lambda, you'll have to introduce logic within your lambdas to determine which type of request you're fulfilling. I can't imagine what you'd gain by using only one lambda.
Keep your lambda code focused on a single responsibility; it'll make it easier to develop, test, and debug.
As you already know, local resolvers are deprecated, so we can't rely on them as a go-forward way of handling the REST cache. What should we use instead of resolvers?
Field policies are not good for that at all. Let's imagine you have two different client queries: getBooks and getBook. Each query gets its data from the REST API. Somehow we need to handle the situation where we already got the data from getBooks and are now running getBook. getBook should not make a request, because the data is already cached. We did this in resolvers before they were deprecated: we just checked the cache and returned the data if it already existed there; if not, we made a request. How can we handle this under the current circumstances?
Sorry, but that's a bit different from what I meant. Here is a code example:
export const getBooks = gql`
  query getBooks {
    getBooks
      @rest(
        type: "Book"
        path: "books"
        endpoint: "v1"
      ) {
      id
      title
      author
    }
  }
`

export const getBook = gql`
  query getBook($id: Int!) {
    getBook(id: $id)
      @rest(
        type: "Book"
        path: "book/{args.id}"
        endpoint: "v1"
      ) {
      id
      title
      author
    }
  }
`
So we have two different queries. The goal is that when we run both in turn, getBook should not make a REST request, because we already have the same data in the cache from getBooks. Before resolvers were deprecated, we handled this in a resolver: if the ID didn't exist in the cache, make a request; if it did, return the cached data. How can we do that now?
As you can see, fetchPolicy is something completely different.
Local fields are also not a good fit, because they operate on individual fields, not on the whole entity.
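For what it's worth, the closest replacement I know of in Apollo Client 3 is a cache redirect defined with a type policy read function. A minimal sketch, assuming Book entities are normalized by id (the field and type names mirror the queries above):

import { ApolloClient, InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        getBook: {
          // If Book:{id} is already in the cache (e.g. written by getBooks),
          // return a reference to it and skip the network; otherwise the
          // reference dangles, the field counts as a cache miss, and the
          // query falls through to the REST link as usual.
          read(existing, { args, toReference }) {
            return existing ?? toReference({ __typename: 'Book', id: args.id });
          },
        },
      },
    },
  },
});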
I'm new to all the hot GraphQL/Apollo stuff.
I have a subscription which gets a search result:
export const SEARCH_RESULTS_SUBSCRIPTION = gql`
  subscription onSearchResultsRetrieved($sid: String!) {
    searchResultsRetrieved(sid: $sid) {
      status
      clusteredOffers {
        id
      }
    }
  }
`;
Is it possible to query the "status" field from the client cache if I need it inside another component? Or do I have to use an additional query?
In the Apollo dev tools I can see that there is a cache entry under "ROOT_SUBSCRIPTION", not "ROOT_QUERY". What does that mean?
Thanks!
I found out that subscribeToMore is my friend for solving this.
First I wrote a normal query for the data I want to subscribe to, so that I have cached data; then the cache gets updated by the subscription.
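A minimal sketch of that pattern with React hooks; SEARCH_RESULTS_QUERY, its field names, and the shape of its result are assumptions, not from the original post:

import { gql, useQuery } from '@apollo/client';
import { useEffect } from 'react';

// Assumed companion query mirroring the subscription's selection set
const SEARCH_RESULTS_QUERY = gql`
  query searchResults($sid: String!) {
    searchResults(sid: $sid) {
      status
      clusteredOffers {
        id
      }
    }
  }
`;

function useSearchResults(sid) {
  // The normal query populates the cache first
  const { data, subscribeToMore } = useQuery(SEARCH_RESULTS_QUERY, {
    variables: { sid },
  });

  useEffect(() => {
    // The subscription then keeps the cached query result up to date
    const unsubscribe = subscribeToMore({
      document: SEARCH_RESULTS_SUBSCRIPTION,
      variables: { sid },
      updateQuery: (prev, { subscriptionData }) => {
        if (!subscriptionData.data) return prev;
        return {
          ...prev,
          searchResults: subscriptionData.data.searchResultsRetrieved,
        };
      },
    });
    return unsubscribe;
  }, [sid, subscribeToMore]);

  return data;
}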
<3 apollo
I'm developing some business administration software in JS, and I find myself in need of ACID transactions with DynamoDB. Luckily for me, AWS just released the transactGet and transactWrite APIs, which cover just the right use case!
I'm using the AWS.DynamoDB.DocumentClient object to make my other calls, and it seems like the transact operations are not exposed for me to use.
I hacked around inside the aws-sdk code and found the interface exported like so ("aws-sdk": "^2.368.0", document_client.d.ts, line 2076):
export interface TransactWriteItem {
  /**
   * A request to perform a check item operation.
   */
  ConditionCheck?: ConditionCheck;
  /**
   * A request to perform a PutItem operation.
   */
  Put?: Put;
  /**
   * A request to perform a DeleteItem operation.
   */
  Delete?: Delete;
  /**
   * A request to perform an UpdateItem operation.
   */
  Update?: Update;
}
However, whenever I try to call the client with these methods, I get back a TypeError, namely:
TypeError: dynamoDb[action] is not a function
Locally, I can just hack the SDK and expose everything, but that is not acceptable in my deployment environment.
How should I proceed?
Thank you very much!
Edit:
If it's worth anything, here is the code I'm using to make the calls:
dynamodb-lib.js:

import AWS from "aws-sdk";

export function call(action, params) {
  const dynamoDb = new AWS.DynamoDB.DocumentClient();
  return dynamoDb[action](params).promise();
}
Lambda code:

import * as dynamoDbLib from '../libs/dynamodb-lib';
import { success, failure } from '../libs/response-lib';

export default async function main(params, callback) {
  try {
    const result = await dynamoDbLib.call("transactWrite", params);
    callback(null, success(result));
  } catch (e) {
    console.log(e);
    callback(null, failure({ "status": "Internal server error" }));
  }
}
Edit 2:
Seems like the document client really does not export the method.
AWS.DynamoDB.DocumentClient = AWS.util.inherit({
  /**
   * @api private
   */
  operations: {
    batchGetItem: 'batchGet',
    batchWriteItem: 'batchWrite',
    putItem: 'put',
    getItem: 'get',
    deleteItem: 'delete',
    updateItem: 'update',
    scan: 'scan',
    query: 'query'
  },
  ...
As mentioned in the accepted answer, the workaround is to use the transactWriteItems method straight from the DynamoDB object.
Thanks for the help! =D
Cheers!
UPDATE:
The issue has been resolved.
GitHub Thread
The AWS SDK does not currently support transactions in the DynamoDB document client. I have raised the issue on GitHub; a current workaround is just not to use the document client:
let aws = require("aws-sdk");

aws.config.update({
  region: "us-east-1",
  endpoint: "dynamodb.us-east-1.amazonaws.com"
});

var dynamodb = new aws.DynamoDB();
dynamodb.transactWriteItems();
Also make sure you are using SDK v2.365.0 or later.
Update the aws-sdk package (>2.365)
Use
var dynamodb = new aws.DynamoDB();
instead of
var dynamodb = new aws.DynamoDB.DocumentClient();
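To make the workaround concrete, here is a hedged sketch of a transactWriteItems call on the plain DynamoDB client; the table names, keys, and update expression are invented for illustration. Note that, unlike the document client, the low-level client requires typed attribute values such as { S: ... } and { N: ... }:

const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

async function placeOrder() {
  // Both writes succeed or fail together (ACID transaction)
  return dynamodb.transactWriteItems({
    TransactItems: [
      {
        Put: {
          TableName: 'Orders',
          Item: {
            orderId: { S: '123' },
            status: { S: 'PENDING' },
          },
        },
      },
      {
        Update: {
          TableName: 'Inventory',
          Key: { sku: { S: 'abc' } },
          UpdateExpression: 'SET stock = stock - :one',
          ExpressionAttributeValues: { ':one': { N: '1' } },
        },
      },
    ],
  }).promise();
}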