How to send multiple files to AWS from React Native in one row? - amazon-web-services

I use React Native with GraphQL.
I want to be able to upload multiple photos in one post in my app.
What I made looks like this.
As you can see, the file column (where the uploaded file is stored) holds a single string.
One row represents one post in my app.
And in my Prisma schema the file column is also a String.
model Photo {
  id        Int      @id @default(autoincrement())
  user      User     @relation(fields: [userId], references: [id])
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  file      String
}
But I want to upload multiple photos in one post, which means one row in the DB.
So my idea is to make my DB look like below.
The file column becomes an array of strings in one row.
To do this, I changed file in my schema to String[].
model Photo {
  id        Int      @id @default(autoincrement())
  user      User     @relation(fields: [userId], references: [id])
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  file      String[]
}
And I send the upload data from the front end as below, using map.
const onValid = async ({ caption }) => {
  const file = await Promise.all(
    selectPhoto.map(
      (sp, index) =>
        new ReactNativeFile({
          uri: sp,
          name: `${index}.jpg`,
          type: "image/jpeg",
        })
    )
  );
  await uploadPhotoMutation({
    variables: {
      caption,
      file,
    },
  });
};
Then the array below is sent to the backend.
Array [
  ReactNativeFile {
    "name": "0.jpg",
    "type": "image/jpeg",
    "uri": "file:///storage/emulated/0/DCIM/Screenshots/Screenshot_20220223-011625_KakaoTalk.jpg",
  },
  ReactNativeFile {
    "name": "1.jpg",
    "type": "image/jpeg",
    "uri": "file:///storage/emulated/0/DCIM/Camera/20220222_161411.jpg",
  },
]
Then I upload this data to AWS in order to get the URLs.
export const uploadSingleFileToS3 = async (file, userId, folderName) => {
  const { filename, createReadStream } = await file;
  const readStream = createReadStream();
  const objectName = `${folderName}/${userId}-${Date.now()}-${filename}`;
  const { Location } = new AWS.S3()
    .upload({
      Bucket: "chungchunonuploads",
      Key: objectName,
      ACL: "public-read",
      Body: readStream,
    })
    .promise();
  return Location;
};

export const uploadFileToS3 = async (filesToUpload, userId, folderName) => {
  const uploadPromises = filesToUpload.map((file) => {
    uploadSingleFileToS3(file, userId, folderName);
  });
  return Promise.all(uploadPromises);
};
However, I can't even check whether my code is right or not, because I keep facing this error.
I don't think it is just a network issue, because all the other functions of the app work fine.
So I guess some backend error is causing this, but I can't figure out any clue from the error message.
I need opinions from many people.
If you have any advice, please tell me.
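Two things in the upload helpers above are worth double-checking, independent of the network error: uploadSingleFileToS3 never awaits upload(...).promise() before destructuring Location, and the map callback in uploadFileToS3 uses braces without a return, so Promise.all resolves to an array of undefined. A minimal corrected sketch of the same helpers, assuming the same bucket name and that file is the promise graphql-upload hands to the resolver:

import AWS from "aws-sdk";

export const uploadSingleFileToS3 = async (file, userId, folderName) => {
  const { filename, createReadStream } = await file;
  const readStream = createReadStream();
  const objectName = `${folderName}/${userId}-${Date.now()}-${filename}`;
  // await the upload before destructuring, otherwise Location is undefined
  const { Location } = await new AWS.S3()
    .upload({
      Bucket: "chungchunonuploads",
      Key: objectName,
      ACL: "public-read",
      Body: readStream,
    })
    .promise();
  return Location;
};

export const uploadFileToS3 = (filesToUpload, userId, folderName) =>
  // return the promise from each map callback so Promise.all can wait for the URLs
  Promise.all(
    filesToUpload.map((file) => uploadSingleFileToS3(file, userId, folderName))
  );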

Related

Next.js SSR with AWS Amplify: Why is a User Needed to View Data?

I'm working with Next.js server-side rendering and AWS Amplify to get data. However, I've hit a roadblock: I'm getting an error saying that there's no current user.
My question is: why does the app need a user if the data is supposed to be readable by the public?
What I'm trying to do is show public data when someone visits a user's profile page. They shouldn't have to be signed into the app.
My current folder structure is:
/pages/[user]/index.js with getStaticPaths and getStaticProps:
export async function getStaticPaths() {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({ query: listUsers });
  const paths = data.listUsers.items.map((user) => ({
    params: { user: user.username },
  }));
  return {
    fallback: true,
    paths,
  };
}

export async function getStaticProps({ params }) {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({
    query: postsByUsername,
    variables: {
      username: params.username,
    },
  });
  return {
    props: {
      posts: data.postsByUsername.items,
    },
  };
}
Finally figured it out. A lot of tutorials use the authMode: 'AMAZON_COGNITO_USER_POOLS' (or AWS_IAM) parameter in their GraphQL queries, for example in https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/
// Creating a post is restricted to IAM
const createdTodo = await API.graphql({
  query: queries.createTodo,
  variables: { input: todoDetails },
  authMode: 'AWS_IAM'
});
But you rarely come across people who use authMode: 'API_KEY'.
So if you want the public to be able to read without authentication, you just need to set authMode: 'API_KEY'.
Make sure you also configure your Amplify API to have a public API key.
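Applied to the getStaticProps from the question, that would look roughly like the sketch below (assuming the API actually has an API key configured as a public authorization mode; note that the dynamic segment created in getStaticPaths is named user, so it comes back as params.user):

export async function getStaticProps({ params }) {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({
    query: postsByUsername,
    variables: {
      // the path segment is called `user` in getStaticPaths above
      username: params.user,
    },
    // allow unauthenticated, public reads via the API key
    authMode: 'API_KEY',
  });
  return {
    props: {
      posts: data.postsByUsername.items,
    },
  };
}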

Using Apollo Client's useQuery to get data by id is not working

Intention:
Trying to query from the Apollo client based on a dynamic id. I have successfully checked the query in the server-provided interface, where it works, and I am trying to do the same from the client.
From the docs it looks like I need to use variables, which I did.
Problem:
The query using variables looks fine, but I am getting undefined on the client.
Query that works in the GraphQL API:
query abc {
  getCategoryProduct(id: "NzI1NDc1MTM1") {
    id
    title
    description
    favorited
    published
    price_per_day
    price_per_week
    price_per_month
    price_per_weekend
    picture
    pictures {
      id
      url
    }
    createdAt
    updatedAt
  }
}
Problematic code in client
import { useEffect } from "react";
import { gql, useQuery } from "@apollo/client";

const GETDETAILS = gql`
  query abc($id: String!) {
    getCategoryProduct(id: $id) {
      id
      title
      description
      favorited
      published
      price_per_day
      price_per_week
      price_per_month
      price_per_weekend
      picture
      pictures {
        id
        url
      }
      createdAt
      updatedAt
    }
  }
`;

const DetailScreen = () => {
  const { loading, error, data } = useQuery(GETDETAILS, {
    variables: { id: "NzI1NDc1MTM1" },
  });
  useEffect(() => {
    if (loading == false) {
      console.log("=====data=====", data); // DATA IS EMPTY, DON'T KNOW WHY??
    }
  }, [data]);
};
I was getting the same bug, and it looked like I had tried everything to solve it, including following the instructions in "useQuery returns undefined, But returns data on gql playground", but it still didn't work.
Later, I changed the variable name (in your case $id) to something else, so that it is different from the name in the typeDefs (getCategoryProduct(id: ID)), and it now works for me 🤨🙏.
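In the code from the question, that rename would look something like the sketch below ($productId is just an illustrative name; per the answer, the point is that the variable name differs from the argument name in the typeDefs):

const GETDETAILS = gql`
  query abc($productId: String!) {
    # the argument keeps its schema name, only the client-side variable is renamed
    getCategoryProduct(id: $productId) {
      id
      title
      # ...rest of the selection set as above
    }
  }
`;

const { loading, error, data } = useQuery(GETDETAILS, {
  variables: { productId: "NzI1NDc1MTM1" },
});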

Protect Strapi uploads folder via S3 SignedUrl

Uploading files from Strapi to S3 works fine.
I am trying to secure the files by using signed URLs:
var params = {
  Bucket: process.env.AWS_BUCKET,
  Key: `${path}${file.hash}${file.ext}`,
  Expires: 3000,
};
var secretUrl = '';
S3.getSignedUrl('getObject', params, function (err, url) {
  console.log('Signed URL: ' + url);
  secretUrl = url;
});

S3.upload(
  {
    Key: `${path}${file.hash}${file.ext}`,
    Body: Buffer.from(file.buffer, 'binary'),
    //ACL: 'public-read',
    ContentType: file.mime,
    ...customParams,
  },
  (err, data) => {
    if (err) {
      return reject(err);
    }
    // set the bucket file url
    //file.url = data.Location;
    file.url = secretUrl;
    console.log('File URL: ' + file.url);
    resolve();
  }
);
file.url (secretUrl) contains the correct URL, which I can use in the browser to retrieve the file.
But when reading the file from the Strapi admin panel, no file or thumbnail is shown.
I figured out that Strapi adds a parameter to the file URL, e.g. ?2304.4005, which breaks the GET of the file from AWS. Where and how do I change that behaviour?
Help is appreciated.
Here is my solution to create a signed URL to secure your assets. The URL will be valid for a certain amount of time.
Create a collection type with a media field which you want to secure. In my example the collection type is called invoice and the media field is called document.
Create an S3 bucket.
Install and configure strapi-provider-upload-aws-s3 and the AWS SDK for JavaScript.
Customize the Strapi controller for your invoice endpoint (in this example I use the core controller findOne):
const { sanitizeEntity } = require('strapi-utils');
var S3 = require('aws-sdk/clients/s3');

module.exports = {
  async findOne(ctx) {
    const { id } = ctx.params;
    const entity = await strapi.services.invoice.findOne({ id });
    // key is the hashed name + file extension of your entity
    const key = entity.document.hash + entity.document.ext;
    // create signed url
    const s3 = new S3({
      endpoint: 's3.eu-central-1.amazonaws.com', // s3.region.amazonaws.com
      accessKeyId: '...', // your accessKeyId
      secretAccessKey: '...', // your secretAccessKey
      Bucket: '...', // your bucket name
      signatureVersion: 'v4',
      region: 'eu-central-1', // your region
    });
    var params = {
      Bucket: '', // your bucket name
      Key: key,
      Expires: 20, // expires in 20 seconds
    };
    var url = s3.getSignedUrl('getObject', params);
    entity.document.url = url; // overwrite the url with the signed url
    return sanitizeEntity(entity, { model: strapi.models.invoice });
  },
};
It seems that even after overwriting the controllers and lifecycles of the collection models and strapi-plugin-content-manager to take the S3 signed URLs into account, one of the Strapi UI components adds a strange hook/ref like ?123.123 to the URL received from the backend. This results in the AWS error "There were headers present in the request which were not signed" when trying to view images from the CMS UI.
Screenshot with the faulty component
After digging through the code and the node_modules used by Strapi, you will find the following in strapi-plugin-upload/admin/src/components/CardPreview/index.js:
return (
  <Wrapper>
    {isVideo ? (
      <VideoPreview src={url} previewUrl={previewUrl} hasIcon={hasIcon} />
    ) : (
      // Adding performance.now forces the browser not to cache the img
      // https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
      <Image src={`${url}${withFileCaching ? `?${cacheRef.current}` : ''}`} />
    )}
  </Wrapper>
);
};

CardPreview.defaultProps = {
  extension: null,
  hasError: false,
  hasIcon: false,
  previewUrl: null,
  url: null,
  type: '',
  withFileCaching: true,
};
The default for withFileCaching is true, so the component appends the cacheRef value (const cacheRef = useRef(performance.now());) to the URL as a query param to avoid browser caching.
Setting it to false, or leaving just <Image src={url} />, should remove the extra query param and allow you to use S3 signed URL previews from the Strapi UI as well.
This translates to customizing the strapi-plugin-upload module in your /extensions/strapi-plugin-upload/... folder, following the docs: https://strapi.io/documentation/developer-docs/latest/development/plugin-customization.html
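A rough sketch of that customization, assuming you copy CardPreview/index.js from node_modules/strapi-plugin-upload into the matching path under extensions/ and only change the caching default (the rest of the file stays exactly as shipped by Strapi):

// extensions/strapi-plugin-upload/admin/src/components/CardPreview/index.js
// ...unchanged copy of the original component code...

CardPreview.defaultProps = {
  extension: null,
  hasError: false,
  hasIcon: false,
  previewUrl: null,
  url: null,
  type: '',
  // disable the cache-busting query param so signed URLs keep their signature intact
  withFileCaching: false,
};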

How to resolve a custom nested GraphQL query with Apollo cacheRedirects

We are using apollo-client in a React project. We added a cursor level on top of all list queries. For example:
query MediaList($mediaIds: [ID!], $type: [MediaType!], $userId: ID!) {
  user {
    id
    medias_cursor(all_medias: true, active: true, ids: $mediaIds) {
      medias {
        id
        type
        category
        name
      }
    }
  }
}
Now, for a different MediaList query, the Media objects might already exist in the cache, but we cannot use them to skip the network request. For example:
After we query medias_cursor({"all_medias":true,"active":true,"ids":["361","362","363"]}),
we already have the three Media objects in the cache (Media:361, Media:362, Media:363).
So when we then query medias_cursor({"all_medias":true,"active":true,"ids":["361","363"]}), we should have everything we need in the cache already. But right now, Apollo's default behavior bypasses the cache and hits the network.
We tried to add a cacheRedirects config to solve this problem, like this:
const cache = new InMemoryCache({
  cacheRedirects: {
    User: {
      medias_cursor: (_, { ids }, { getCacheKey }) => {
        if (!ids) return undefined;
        return {
          medias: map(ids, id => {
            return getCacheKey({ __typename: 'Media', id: id });
          }),
        };
      },
    },
  },
});
We expected that cacheRedirects would let us use the cache when it is available, but it still skips the cache anyway.

What API template can I use to get the correct JSON object to store in DynamoDB?

I'm currently doing a project that stores HTTP POST urlencoded data from a device into DynamoDB. But I can't get the correct object format in DynamoDB; all I can get is like this
and my Lambda function that passes the data is like this:
const querystring = require('querystring');

exports.handler = function (event, context, callback) {
  var input = querystring.parse(event.body);
  var inputttt = input.data;
  var params = {
    Item: {
      date: Date.now(),
      message: event.body,
      ID: inputttt,
      a: { "id": "123456", "data": [{ "mac": "1231" }] }
    },
    TableName: 'wifi'
  };
Also, my API uses application/x-www-form-urlencoded and the mapping template is:
{
  "body": $input.json('$')
}
What I need in DynamoDB is something like a standard JSON object like this.
I can't change anything on the client device; all I can change is the upload URL, which is the API endpoint.
You don't need to provide the types if you use the DocumentClient API instead of the low-level DynamoDB API.
DocumentClient is an API that abstracts the types away when manipulating items in DynamoDB, making it much easier to read and write items.
Assuming you're invoking DynamoDB.putItem(params) at some point, you'll need to replace it with DocumentClient's API and use its put method instead.
Your code will then be as simple as:
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  await docClient.put({
    Item: {
      date: Date.now(),
      message: JSON.parse(event.body),
      ID: 'some-random-id-you-choose',
      a: { "id": "123456", "data": [{ "mac": "1231" }] }
    },
    TableName: 'wifi'
  }).promise();
};
Note that I'm using async/await, so you no longer need Lambda's callback or DynamoDB's callbacks.
All API operations for DocumentClient are available in the official docs.
While DynamoDB is a schema-less document store, it does require that you declare the type of data that is stored in the fields of the item.
Your code should look like this:
const aws = require('aws-sdk');
const ddb = new aws.DynamoDB();

const Item = {
  date: { N: Date.now().toString() }, // the low-level API takes number values as strings
  message: { S: event.body },
  ID: { S: inputttt },
  a: { M: {
    "id": { S: "123456" },
    "data": { L: [{ M: { "mac": { S: "1231" } } }] }
  }}
};
const TableName = 'wifi';
ddb.putItem({ Item, TableName }, (err, data) => { ... });
In the code above, every property of Item is an object mapping a type to a value. For example, date is a number type, with {N: Date.now().toString()} (the low-level API expects numeric values as strings); a is an object, or map, with {M: {"id": ... }}; and id is a string, with {S: '123456'}.
The code above makes some assumptions about the types. You should make sure that the types I chose are correct for your data (e.g., I assumed event.body and inputttt are strings).