Cannot find module '@google-cloud/storage' - google-cloud-platform

I am using the GCP console in my browser. I have created a function as follows:
function listFiles(bucketName) {
  // [START storage_list_files]
  // Imports the Google Cloud client library
  const Storage = require('@google-cloud/storage');

  // Creates a client
  const storage = new Storage();

  storage
    .bucket(bucketName)
    .getFiles()
    .then(results => {
      const files = results[0];
      console.log('Files:');
      files.forEach(file => {
        console.log(file.name);
      });
    })
    .catch(err => {
      console.error('ERROR:', err);
    });
  // [END storage_list_files]
}
exports.helloWorld = function helloWorld (req, res) {
  if (req.body.message === undefined) {
    // This is an error case, as "message" is required
    res.status(400).send('No message defined!');
  } else {
    // Everything is ok
    console.log(req.body.lat);
    console.log(req.body.lon);
    listFiles("drive-test-demo");
    res.status(200).end();
  }
}
Literally all I am trying to do right now is list the files inside a bucket, if a certain HTTPS trigger comes through.
My package.json file is as follows:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/storage": "1.5.1"
  }
}
and I am getting the error "Cannot find module '@google-cloud/storage'".
Most questions I have seen so far have been resolved by running npm install, but I don't know how to do that considering that my index.js and package.json files are stored in a zip file inside a gcloud bucket. Any advice on how to solve this would be much appreciated.

Open a console, change directory to your functions project and type:
npm install --save @google-cloud/storage
That's all!
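A note for the zip-in-a-bucket setup described in the question: Cloud Functions runs npm install against the package.json bundled with your source at deploy time, so you normally don't run it yourself; declaring the dependency is enough. A rough sketch of such a deployment (function name, runtime and bucket path are placeholders, not taken from the question):
gcloud functions deploy helloWorld \
  --runtime nodejs10 \
  --trigger-http \
  --source gs://my-bucket/function-source.zip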

Related

Uppy Companion doesn't work for > 5GB files with Multipart S3 uploads

Our app allows our clients to upload large files. Files are stored on AWS S3; we use Uppy for the upload and dockerize it to run under a Kubernetes deployment where we can scale up the number of instances.
It works well, but we noticed that all uploads larger than 5GB fail. I know Uppy has a plugin for AWS multipart uploads, but even when it is installed during the container image creation, the result is the same.
Here's our Dockerfile. Has someone ever succeeded in uploading files larger than 5GB to S3 via Uppy? Is there anything we're missing?
FROM node:alpine AS companion
RUN yarn global add @uppy/companion@3.0.1
RUN yarn global add @uppy/aws-s3-multipart
ARG UPPY_COMPANION_DOMAIN=[...redacted..]
ARG UPPY_AWS_BUCKET=[...redacted..]
ENV COMPANION_SECRET=[...redacted..]
ENV COMPANION_PREAUTH_SECRET=[...redacted..]
ENV COMPANION_DOMAIN=${UPPY_COMPANION_DOMAIN}
ENV COMPANION_PROTOCOL="https"
ENV COMPANION_DATADIR="COMPANION_DATA"
# ENV COMPANION_HIDE_WELCOME="true"
# ENV COMPANION_HIDE_METRICS="true"
ENV COMPANION_CLIENT_ORIGINS=[...redacted..]
ENV COMPANION_AWS_KEY=[...redacted..]
ENV COMPANION_AWS_SECRET=[...redacted..]
ENV COMPANION_AWS_BUCKET=${UPPY_AWS_BUCKET}
ENV COMPANION_AWS_REGION="us-east-2"
ENV COMPANION_AWS_USE_ACCELERATE_ENDPOINT="true"
ENV COMPANION_AWS_EXPIRES="3600"
ENV COMPANION_AWS_ACL="public-read"
# We don't need to store data for just S3 uploads, but Uppy throws unless this dir exists.
RUN mkdir COMPANION_DATA
CMD ["companion"]
EXPOSE 3020
EDIT:
I made sure I had:
uppy.use(AwsS3Multipart, {
  limit: 5,
  companionUrl: '<our uppy url>',
})
And it still doesn't work. I see all the chunks of the 9GB file sent on the network tab, but as soon as it hits 100%, Uppy throws an error "cannot post" (to our S3 URL) and the upload fails.
Has anyone ever encountered this? The upload goes fine until 100%, then the last request gets HTTP error 413, making the entire upload fail.
Thanks!
Here I'm adding some code samples from my repository that will help you understand the flow of using the busboy package to stream data to the S3 bucket. I'm also adding reference links here so you can get details on the packages I'm using.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/index.html
https://www.npmjs.com/package/busboy
// Excerpted from a larger project: Busboy, Request/Response (express),
// PutObjectCommand (@aws-sdk/client-s3), ResponseDto, metaData and this.S3
// are imported/defined elsewhere in the repository.
export const uploadStreamFile = async (req: Request, res: Response) => {
  const busboy = new Busboy({ headers: req.headers });
  const streamResponse = await busboyStream(busboy, req);
  const uploadResponse = await s3FileUpload(streamResponse.data.buffer);
  return res.send(uploadResponse);
};

const busboyStream = async (busboy: any, req: Request): Promise<any> => {
  return new Promise((resolve, reject) => {
    try {
      const fileData: any[] = [];
      let fileBuffer: Buffer;
      busboy.on('file', async (fieldName: any, file: any, fileName: any, encoding: any, mimetype: any) => {
        // ! File is missing in the request
        if (!fileName)
          reject("File not found!");
        let totalBytes: number = 0;
        file.on('data', (chunk: any) => {
          fileData.push(chunk);
          // ! the following lines are for logging purposes only
          // TODO will remove once project is live
          totalBytes += chunk.length;
          console.log('File [' + fieldName + '] got ' + chunk.length + ' bytes');
        });
        file.on('error', (err: any) => {
          reject(err);
        });
        file.on('end', () => {
          fileBuffer = Buffer.concat(fileData);
        });
      });
      // ? File parsing went well
      busboy.on('finish', () => {
        const responseData: ResponseDto = {
          status: true, message: "File parsing done", data: {
            buffer: fileBuffer,
            metaData
          }
        };
        resolve(responseData);
        console.log('Done parsing data! -> File uploaded');
      });
      req.pipe(busboy);
    } catch (error) {
      reject(error);
    }
  });
}
const s3FileUpload = async (fileData: any): Promise<ResponseDto> => {
  try {
    const params: any = {
      Bucket: <BUCKET_NAME>,
      Key: <path>,
      Body: fileData,
      ContentType: <content_type>,
      ServerSideEncryption: "AES256",
    };
    const command = new PutObjectCommand(params);
    const uploadResponse: any = await this.S3.send(command);
    return { status: true, message: "File uploaded successfully", data: uploadResponse };
  } catch (error) {
    const responseData = { status: false, message: "Monitor connection failed, please contact tech support!", error: error.message };
    return responseData;
  }
}
In the AWS S3 service, a single PUT operation can upload an object of at most 5 GB.
To upload files larger than 5GB to S3 you need to use the S3 multipart upload API, and on the client side the AwsS3Multipart Uppy plugin.
Check your upload code to confirm you are using AwsS3Multipart correctly, for example by setting the limit properly; in this case a limit between 5 and 15 is recommended.
import AwsS3Multipart from '@uppy/aws-s3-multipart'

uppy.use(AwsS3Multipart, {
  limit: 5,
  companionUrl: 'https://uppy-companion.myapp.net/',
})
Also, check this issue on GitHub: "Uploading a large >5GB file to S3 errors out" #1945.
If you're getting Error: request entity too large in your Companion server logs, I fixed this in my Companion Express server by increasing the body-parser limit:
app.use(bodyparser.json({ limit: '21GB', type: 'application/json' }))
This is a good working example of Uppy S3 MultiPart uploads (without this limit increased): https://github.com/jhanitesh10/uppy
I'm able to upload files up to a (self-imposed) limit of 20GB using this code.
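For context, a rough sketch of where that limit increase sits in a Companion Express server (the Companion mounting itself is omitted; the port and limit are the values used above):
const express = require('express');
const bodyparser = require('body-parser');

const app = express();

// Raise the JSON body limit before mounting Companion: the request that
// completes a multipart upload lists every uploaded part, and for very
// large files that list can exceed body-parser's default limit, which
// surfaces as a 413 on the final request, as described in the question.
app.use(bodyparser.json({ limit: '21GB', type: 'application/json' }));

// ...mount @uppy/companion here as described in its docs...

app.listen(3020);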

GCP cloud function to suspend and resume the GCP instances

We can use GCP Cloud Functions to start and stop GCP instances, but I need to implement scheduled suspend and resume of GCP instances using a Cloud Function and Cloud Scheduler.
From the GCP documentation, I got that we can do start and stop using the Cloud Functions samples available below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/functions/scheduleinstance
Do we have the same Node.js (or other language) packages available to suspend and resume GCP instances?
If not, can we create our own for suspend/resume?
When I tried one, I got the below error:
"TypeError: compute.zone(...).vm(...).resume is not a function"
Edit: thanks Chris and Guillaume, after going through your links I have edited my code, and below is my index.js file now.
For some reason, when I do
gcloud functions deploy resumeInstancePubSub --trigger-topic resume-instance --runtime nodejs10 --allow-unauthenticated
I always get
Function 'resumeInstancePubSub1' is not defined in the provided module.
resumeInstancePubSub1 2020-09-04 10:57:00.333 Did you specify the correct target function to execute?
I have not worked on Node.js or JS before. I was expecting something similar to the start/stop documentation, which I could make work easily using the below git repo:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
My index.js file:
// BEFORE RUNNING:
// ---------------
// 1. If not already done, enable the Compute Engine API
// and check the quota for your project at
// https://console.developers.google.com/apis/api/compute
// 2. This sample uses Application Default Credentials for authentication.
// If not already done, install the gcloud CLI from
// https://cloud.google.com/sdk and run
// `gcloud beta auth application-default login`.
// For more information, see
// https://developers.google.com/identity/protocols/application-default-credentials
// 3. Install the Node.js client library by running
// `npm install googleapis --save`
const {google} = require('googleapis');
var compute = google.compute('beta');

authorize(function(authClient) {
  var request = {
    // Project ID for this request.
    project: 'my-project', // TODO: Update placeholder value.
    // The name of the zone for this request.
    zone: 'my-zone', // TODO: Update placeholder value.
    // Name of the instance resource to resume.
    instance: 'my-instance', // TODO: Update placeholder value.
    resource: {
      // TODO: Add desired properties to the request body.
    },
    auth: authClient,
  };

  exports.resumeInstancePubSub = async (event, context, callback) => {
    try {
      const payload = _validatePayload(
        JSON.parse(Buffer.from(event.data, 'base64').toString())
      );
      const options = {filter: `labels.${payload.label}`};
      const [vms] = await compute.getVMs(options);
      await Promise.all(
        vms.map(async (instance) => {
          if (payload.zone === instance.zone.id) {
            const [operation] = await compute
              .zone(payload.zone)
              .vm(instance.name)
              .resume();
            // Operation pending
            return operation.promise();
          }
        })
      );
      // Operation complete. Instance successfully started.
      const message = `Successfully started instance(s)`;
      console.log(message);
      callback(null, message);
    } catch (err) {
      console.log(err);
      callback(err);
    }
  };

  compute.instances.resume(request, function(err, response) {
    if (err) {
      console.error(err);
      return;
    }
    // TODO: Change code below to process the `response` object:
    console.log(JSON.stringify(response, null, 2));
  });
});

function authorize(callback) {
  google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  }).then(client => {
    callback(client);
  }).catch(err => {
    console.error('authentication failed: ', err);
  });
}
Here and here is the documentation for the new beta version of the API. You can see that you can suspend an instance like:
compute.instances.suspend(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // TODO: process the `response` object
  console.log(JSON.stringify(response, null, 2));
});
And you can resume an instance in a similar way:
compute.instances.resume(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  // TODO: process the `response` object
  console.log(JSON.stringify(response, null, 2));
});
GCP recently added a "create schedule" feature to start and stop VM instances based on a configured schedule.
More details can be found at:
https://cloud.google.com/compute/docs/instances/schedule-instance-start-stop#managing_instance_schedules
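As a rough sketch (the schedule name, region, zone, instance name and cron expressions are placeholders; check the linked docs for the current flags), such a schedule can be created with gcloud and then attached to an instance:
gcloud compute resource-policies create instance-schedule my-start-stop-schedule \
  --region=us-central1 \
  --vm-start-schedule="0 8 * * *" \
  --vm-stop-schedule="0 18 * * *" \
  --timezone=UTC

gcloud compute instances add-resource-policies my-instance \
  --zone=us-central1-a \
  --resource-policies=my-start-stop-schedule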

How to upload files to S3 bucket from a docker container?

I have containerized my project that uploads files to S3.
Everything was working fine when I was uploading the files from my local file system.
Then I mounted my container to my local file system, and the uploading stopped working.
The following is the function for uploading the files to the S3 bucket:
// AWS configuration
AWS.config.update({ region: 'ap-northeast-1' });
let s3 = new AWS.S3({ apiVersion: '2006-03-01' });

// ...

function s3uploader(uploadingVideo) {
  // call S3 to retrieve upload file to specified bucket
  let uploadParams = { Bucket: "my-bucket", Key: '', Body: '' };
  let file = uploadingVideo;
  console.log(file);

  // Configure the file stream and obtain the upload parameters
  let fileStream = fs.createReadStream(file);
  fileStream.on('error', function (err) {
    console.log('File Error', err);
  });
  uploadParams.Body = fileStream;
  uploadParams.Key = path.basename(file);

  // call S3 to retrieve upload file to specified bucket
  s3.upload(uploadParams, function (err, data) {
    console.log("Hello World!");
    if (err) {
      console.log("Error", err);
    }
    if (data) {
      console.log("Upload Success", data.Location);
    }
  });
}
At the moment, nothing happens when running the container. No error, not even the "Hello World!" part, so I think s3.upload is not being called in the first place.
I have found a similar question here, but it wasn't helpful to my case.
I also thought of installing the AWS CLI from the Dockerfile, but didn't succeed with that either.
What exactly is missing here, and how do I fix it?
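One thing worth checking (a debugging sketch only, not a confirmed fix; the file path is a placeholder) is whether the path you pass to s3uploader actually exists inside the container and whether the SDK can resolve credentials there, since both commonly differ between the host and a container:
const fs = require('fs');
const AWS = require('aws-sdk');

AWS.config.update({ region: 'ap-northeast-1' });

// 1. Does the path passed to s3uploader exist inside the container?
const file = '/path/inside/container/video.mp4'; // placeholder
console.log('File exists in container:', fs.existsSync(file));

// 2. Can the SDK resolve credentials inside the container
//    (environment variables, a mounted ~/.aws, or an IAM role)?
AWS.config.getCredentials((err) => {
  if (err) {
    console.log('No credentials found:', err.message);
  } else {
    console.log('Using access key:', AWS.config.credentials.accessKeyId);
  }
});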

jest-haste-map: Haste module naming collision (AWS, RN)

I have a React Native project with AWS Amplify.
In the root directory, there is an amplify folder.
Inside this amplify folder, there is a backend folder and a #current-cloud-backend folder.
These two are basically identical.
When I try to start my project with npm run start I receive this error:
The following files share their name; please adjust your hasteImpl:
* <rootDir>/amplify-backup/backend/function/cxLoyaltyMainAppVerifyAuthChallengeResponse/src/package.json
* <rootDir>/amplify/#current-cloud-backend/function/cxLoyaltyMainAppVerifyAuthChallengeResponse/src/package.json
And it is complaining that inside these two folders, each Lambda function has its own package.json, named identically to its counterpart in the other folder.
What I have done so far
I have found many people suggesting to put modulePathIgnorePatterns: ['<rootDir>/build'] inside my root package.json under jest, roughly as sketched below. Some also say to put it inside jest.config.js, which I cannot find anywhere.
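For reference, that suggestion amounts to something like this in the root package.json (the ignored path is the one people commonly suggest, not one specific to this project):
"jest": {
  "modulePathIgnorePatterns": ["<rootDir>/build"]
}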
I have also tried creating a root rn-cli.config.js and added
module.exports = {
  resolver: {
    blacklistRE: blacklist([
      /node_modules\/.*\/node_modules\/react-native\/.*/,
    ])
  },
};
which also does not work.
I am really running out of ideas here, anyone have any ideas? Thank you
I am using the Expo CLI and was having the same problem.
The solution that worked for me: create a metro.config.js at the root directory (instead of rn-cli.config.js):
const blacklist = require('metro-config/src/defaults/blacklist');

module.exports = {
  resolver: {
    blacklistRE: blacklist([/#current-cloud-backend\/.*/]),
  },
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
};
UPDATE 2022! Just change the required path from the previous answer: it's no longer defaults/blacklist but defaults/exclusionList. So the solution:
I am using the Expo CLI and was having the same problem.
The solution that worked for me: create a metro.config.js at the root directory (instead of rn-cli.config.js):
const blacklist = require('metro-config/src/defaults/exclusionList');

module.exports = {
  resolver: {
    blacklistRE: blacklist([/#current-cloud-backend\/.*/]),
  },
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
};
Adding the below snippet in the metro.config.js file worked for me
I am using:
react-native-cli: 2.0.1
react-native: 0.63.4
amplify: 5.3.0
const exclusionList = require('metro-config/src/defaults/exclusionList');

module.exports = {
  resolver: {
    blacklistRE: exclusionList([/#current-cloud-backend\/.*/]),
  },
  transformer: {
    getTransformOptions: async () => ({
      transform: {
        experimentalImportSupport: false,
        inlineRequires: false,
      },
    }),
  },
};
Also, you will need to install metro-config as a dependency by running npm i -D metro-config
In my case I have a project managed with Expo and a rule to resolve files of type cjs. I only had to include the line:
defaultConfig.resolver.blacklistRE = blacklist([/#current-cloud-backend\/.*/]);
Final result:
const { getDefaultConfig } = require("@expo/metro-config");
const blacklist = require('metro-config/src/defaults/exclusionList');

const defaultConfig = getDefaultConfig(__dirname);
defaultConfig.resolver.assetExts.push("cjs");
defaultConfig.resolver.blacklistRE = blacklist([/#current-cloud-backend\/.*/]);

module.exports = defaultConfig;

how to set up a layer for node-oracledb package in aws lambda environment?

I want to connect to an Oracle database hosted in RDS from the AWS Lambda Node.js runtime. After research I found out that I need to download the node-oracledb package and create a layer for the node module and the binary lib files. So I created the folder structure shown below, zipped the folder, uploaded it as an AWS layer and attached the layer to the Lambda. However I get "errorMessage": "Cannot find module 'oracledb'". Any clue why AWS Node cannot find the module? Thank you.
Lambda-Layer-1(version 1)
|
|__lib
| |__libaio.so.1
| |__libclntsh.so.12.1
| |__libclntschcore.so.12.1
| |__libipc1.so
| |__libmql1.so
| |__libnnz12.so
| |__libociicus.so
| |__libons.so
|
|__nodejs
|
|__node_modules
|
|__oracledb
Error from lambda:
"errorMessage": "Cannot find module 'oracledb'",
"errorType": "Error",
"stackTrace": [
"Module.require (module.js:596:17)",
"require (internal/module.js:11:18)",
"Object.<anonymous> (/var/task/src/services/oracleDb.service.js:10:18)",
"Module._compile (module.js:652:30)",
"Object.Module._extensions..js (module.js:663:10)",
"Module.load (module.js:565:32)",
AWS runtime:
Nodejs:8.10
node-oracledb:"3.1.2"
code:
const oracledb = require("oracledb");

let connection;

static async init() {
  try {
    if (!connection) {
      const connectionAtrribute = {
        connectionString: 'uat-*******',
        password: '*******',
        user: '*******'
      };
      connection = await oracledb.getConnection(connectionAtrribute);
    }
  }
  catch (error) {
    console.log('ERROR', JSON.stringify(error));
  }
}
There is another library, oracledb-for-lambda, to connect to an Oracle DB from Lambda:
npm i oracledb-for-lambda
Now, in the project folder, create a folder named "nodejs" and move the "node_modules" folder into this "nodejs" folder. Then copy the "lib" folder from inside "node_modules/oracledb-for-lambda" and paste it outside, in the main project directory.
Finally, you will get a folder structure like the one sketched below.
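A rough sketch of that layout, based on the steps above (the root folder name is illustrative; the lib contents are the files shipped with oracledb-for-lambda):
layer-project
|
|__lib
| |__(the .so files copied from node_modules/oracledb-for-lambda/lib)
|
|__nodejs
|
|__node_modules
|
|__oracledb-for-lambda
|__(other dependencies)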
That’s it, Zip the files inside the folder and Upload the Zip to S3
And you can connect using below code
'use strict';

var os = require('os');
var fs = require('fs');
var oracledb = require('oracledb-for-lambda');

exports.handler = async (event, context) => {
  // Map the Lambda hostname to localhost via HOSTALIASES so the Oracle client can resolve it.
  // (fs.writeFileSync is synchronous and takes no callback.)
  let str_host = os.hostname() + ' localhost\n';
  fs.writeFileSync(process.env.HOSTALIASES, str_host);

  var connAttr = {
    user: process.env.USERNAME,
    password: process.env.PASSWORD,
    connectString: process.env.CONNECTION_STRING
  };

  const promise = new Promise(function(resolve, reject) {
    oracledb.getConnection(connAttr, function(err, connection) {
      if (err) {
        reject({
          status: "ERROR"
        });
        return;
      }
      resolve({
        status: "SUCCESS"
      });
    });
  });
  return promise;
}