How to copy files from Google Drive to an S3 bucket using Google Apps Script?

I created a Google Form with a linked Google Spreadsheet. I would like that every time someone submits the form, the spreadsheet is copied to an S3 bucket in AWS. To do so, I just got started with Google Apps Script. I managed to get the trigger working on form submit, but I am struggling to understand the readme of this GitHub project for uploading to S3.
function setUpTrigger() {
  ScriptApp.newTrigger('copyDataS3')
    .forForm('1SK-2Ow63vs_TaoF54UjSgn35FL7F8_ANHDTOOiTabMM')
    .onFormSubmit()
    .create();
}
function copyDataS3() {
  // https://github.com/viuinsight/google-apps-script-for-aws
  // I do not understand where I should place aws.js and util.js.
  // Should I do File -> New -> Script file and copy-paste the contents? Should the file be .js or .gs?
  S3.init("MY_ACCESS_KEY", "MY_SECRET_KEY");
  // If I want to copy a spreadsheet with the following ID, what should go into "object" below?
  var ssID = "SPREADSHEET_ID";
  S3.putObject(bucketName, objectName, object, region)
}

I believe your goal is as follows.
You want to send a Google Spreadsheet to an S3 bucket as CSV data using Google Apps Script.
Modification points:
Looking at the google-apps-script-for-aws library you are using, I noticed that it expects the data as a string. In that case, your CSV data can probably be sent directly; but when you want to send binary data, for example, an error will occur. So in this answer, I would like to propose modified scripts for two patterns.
The situation seemed similar to this thread, but I noticed that you are using a different library from that thread, so I am posting this answer.
Pattern 1:
This pattern supposes that only text data is sent, like the CSV data mentioned in your reply. In this case, the library does not need to be modified.
Modified script:
S3.init("MY_ACCESS_KEY", "MY_SECRET_KEY"); // Please set this.
var spreadsheetId = "###"; // Please set the Spreadsheet ID.
var sheetName = "Sheet1"; // Please set the sheet name.
var region = "###"; // Please set this.
var csv = SpreadsheetApp
  .openById(spreadsheetId)
  .getSheetByName(sheetName)
  .getDataRange()
  .getValues() // or .getDisplayValues()
  .map(r => r.join(","))
  .join("\n");
var blob = Utilities.newBlob(csv, MimeType.CSV, sheetName + ".csv");
S3.putObject("bucketName", "test.csv", blob, region);
Pattern 2:
This pattern supposes that both text data and binary data are sent. In this case, the library side must also be modified.
For google-apps-script-for-aws
Please modify line 110 of s3.js as follows.
From:
var content = object.getDataAsString();
To:
var content = object.getBytes();
And please modify line 146 of s3.js as follows.
From:
Utilities.DigestAlgorithm.MD5, content, Utilities.Charset.UTF_8));
To:
Utilities.DigestAlgorithm.MD5, content));
For Google Apps Script:
In this case, please give the blob to S3.putObject as follows.
Script:
S3.init("MY_ACCESS_KEY", "MY_SECRET_KEY"); // Please set this.
var fileId = "###"; // Please set the file ID.
var region = "###"; // Please set this.
var blob = DriveApp.getFileById(fileId).getBlob();
S3.putObject("bucketName", blob.getName(), blob, region);
References:
viuinsight/google-apps-script-for-aws
Class UrlFetchApp
computeDigest(algorithm, value)
PutObject

Related

Postman accessing the stored results in the database leveldb

So I have a set of results in Postman from a runner on a collection, using a data file for iterations. I have the stored data from the runner in the Postman app on Linux, but I want to know how I can get hold of the data. There seems to be a database hidden away in the ~/.config directory (/Desktop/file__0.indexeddb.leveldb) that looks like it has the data from the results.
Is there any way that I can get hold of the raw data? I want to save the results from the database rather than faff around with running newman, or hacking a server to post the results and then save them; I already have 20,000 results in a collection. I want to get the responseData from each post and save it to a file. I will not execute the posts again; I just need to work out a way to extract what is already stored.
I've tried KeyLord, FastNoSQL (this crashes), and levelDBViewer (Jar), but I'm not having any luck here.
Any suggestions?
Inline at line 25024 of runner.js, as a simple yet hacky workaround for small numbers of results, I can do the following:
RunnerResultsRequestListItem = __WEBPACK_IMPORTED_MODULE_2_pure_render_decorator___default()(_class = class RunnerResultsRequestListItem extends __WEBPACK_IMPORTED_MODULE_0_react__["Component"] {
  constructor(props) {
    super(props);
    // save this result's response body as a .txt file via a temporary anchor
    var text = props.request.response.body,
        blob = new Blob([text], { type: 'text/plain' }),
        anchor = document.createElement('a');
    anchor.download = props.request.ref + ".txt";
    anchor.href = (window.webkitURL || window.URL).createObjectURL(blob);
    anchor.dataset.downloadurl = ['text/plain', anchor.download, anchor.href].join(':');
    anchor.click();
    // ... the rest of the original constructor continues unchanged
It allows me to save, but obviously I have to click save for now; does anyone know how to automate the saving part? Please add something here!
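To generalize the hack a little, the anchor-click trick can be pulled out into a standalone helper (the name is illustrative, not a Postman API), so automating the saving is just a matter of calling it in a loop over whatever result objects you can reach:
// Illustrative helper: save any text to a file via a temporary anchor element.
function saveTextAsFile(text, filename) {
  var blob = new Blob([text], { type: 'text/plain' }),
      anchor = document.createElement('a');
  anchor.download = filename;
  anchor.href = (window.webkitURL || window.URL).createObjectURL(blob);
  anchor.click(); // triggers the download without further interaction
}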

Postman - How to store multiple values from a response header in a var or just be able to see them

Using a GET in Postman with the URL posted below, I am able to store the entire response header in question, with all of its data, in a var. The issue for me is how to verify the pieces of data inside that var.
Here is my URL:
http://localhost/v1/accounts?pageNumber=1&pageSize=2
(The response includes an X-Pagination header whose value is a JSON string.)
Using Postman I am able to get that header in a var:
var XPaginationData = postman.getResponseHeader(pm.globals.get("PaginationHeader"));
pm.globals.set("XPaginationData", XPaginationData);
Is there a way to get the individual values inside the X-Pagination response header stored in different vars, to assert later? Using this in Postman:
pm.globals.set("XPaginationData", JSON.stringify(pm.response.headers));
console.log(JSON.parse(pm.globals.get('XPaginationData')));
console.log(JSON.parse(pm.globals.get('XPaginationData'))[4].value);
I get the list of headers in the console.
How would I go about getting "TotalCount", for example?
BIG EDIT:
Thanks to a coworker, the solution is this:
// Filtering response headers to get PaginationHeader
var filteredHeaders = pm.response.headers.all()
  .filter(headerObj => {
    return headerObj.key == pm.globals.get("PaginationHeader");
  });
// JSON parse the string of the requested response header
// from var filteredHeaders
var paginationObj = filteredHeaders[0].value;
paginationObj = JSON.parse(paginationObj);
// Stores global variable for nextPageURL
var nextPageURL = paginationObj.NextPageLink;
postman.setGlobalVariable("nextPageURL", nextPageURL);
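With paginationObj parsed, the individual values asked about above fall out as plain properties, e.g. (assuming the header JSON actually contains a TotalCount field, as in the question):
// TotalCount is assumed to be a property of the X-Pagination JSON.
var totalCount = paginationObj.TotalCount;
pm.globals.set("totalCount", totalCount);
console.log("TotalCount:", totalCount);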
You could use JSON.stringify() when saving the variable and then use JSON.parse() to access the property or properties that you need.
If you set a global variable for the response headers like this:
pm.globals.set('PaginationHeader', JSON.stringify(pm.response.headers))
Then you can get any of the data from the variable like this:
console.log(JSON.parse(pm.globals.get('PaginationHeader'))[1].value)
The ordering of the headers returned in the console is inconsistent, so you will need to find the correct index to extract the data from the X-Pagination header.
Looks like an issue with Postman itself.
The only solution that worked for me was to stringify & parse the JSON again, like this:
var response = JSON.parse(JSON.stringify(res))
After doing this, the headers and all other properties are accessible as expected.

How to generate multiple sheets in one excel file using ember-cli-data-export

I am able to generate only one sheet with ember-cli-data-export, through the following JavaScript call:
this.get('excel').export(data, {sheetName: 'Overview', fileName: 'test.xlsx'});
I have tried the following way (below) to generate multiple sheets, but it is not working.
this.get('excel').export([data1, data2], {sheetName: ['Overview', 'Next'], fileName: 'test.xlsx'});
How do I generate multiple sheets in one Excel file test.xlsx using ember-cli-data-export?
ember-cli-data-export isn't able to export multiple worksheets in a single file.
If you take a look at the .export function in the repository, you'll see (comments my own):
var wb = new Workbook(), ws = sheet_from_array_of_arrays(data); //Create a single workbook
wb.SheetNames.push(options.sheetName); //take the sheetname out of the options and create a new sheet
wb.Sheets[options.sheetName] = ws; // Push mapped data to the new sheet
var wbout = XLSX.write(wb, {bookType:'xlsx', bookSST:true, type: 'binary'}); //Store the workbook
saveAs(new Blob([s2ab(wbout)],{type:"application/octet-stream"}), options.fileName); //Prompt for save
If you need multi-sheet saving, you'll need to fork the repo and modify it to handle multiple sheets, or use the xlsx package to write the file yourself.
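For what it's worth, here is a minimal sketch of the multi-sheet case using the xlsx (SheetJS) utilities directly, assuming data1 and data2 are arrays of arrays like the data passed to ember-cli-data-export:
// Sketch using the xlsx (SheetJS) package directly; data1/data2 are assumed
// to be arrays of arrays.
var wb = XLSX.utils.book_new();
XLSX.utils.book_append_sheet(wb, XLSX.utils.aoa_to_sheet(data1), 'Overview');
XLSX.utils.book_append_sheet(wb, XLSX.utils.aoa_to_sheet(data2), 'Next');
XLSX.writeFile(wb, 'test.xlsx'); // triggers the browser download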

Can I convert browser generated image blob to image file for upload?

I'm using fabric.js to dynamically create textures in Three.js, and I need to save the textures to AWS. I'm using meteor-slingshot, which normally takes images passed in through a file-selector input. Here's the uploader:
var uploader = new Slingshot.Upload("myFileUploads");
uploader.send(document.getElementById('input').files[0], function (error, downloadUrl) {
  if (error) {
    console.error('Error uploading', uploader.xhr.response);
    alert(error);
  }
  else {
    Meteor.users.update(Meteor.userId(), {$push: {"profile.files": downloadUrl}});
  }
});
Uploading works fine from the drive ... but I'm generating my files in the browser, not getting them from the drive. Instead, they are generated from a canvas element with the following method:
generateTex: function(){
  var canvTex = document.getElementById('texture-generator');
  var canvImg = canvTex.toDataURL('image/jpeg');
  var imageNew = document.createElement('img');
  imageNew.src = canvImg;
}
This works great as well. If I console.log the imageNew, I get my lovely image with base64 encoding:
<img src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAMCAgICAgMCAgID
//....carries on to 15k or so characters
If I console.log a file object added from the drive via filepicker (not generated from a canvas), I can see what the file object should look like:
file {
  lastModified: 1384216556000
  lastModifiedDate: Mon Nov 11 2013 16:35:56 GMT-0800 (PST)
  name: "filename.png"
  size: 3034
  type: "image/png"
  webkitRelativePath: ""
  __proto__: File
}
But I can't create a file from the blob for upload, because there is no place in the file object to add the actual data.
To sum up, I can:
Generate an image blob and display it in a DOM element
Upload files from the drive using meteor-slingshot
Inspect the existing file object
But I don't know how to convert the blob into a named file, so I can pass it to the uploader.
I don't want to download the image (there are answers for that); I want to upload it. There is a "chrome only" way to do this with the filesystem API, but I need something cross-browser (and eventually cross-platform). If someone could help me with this, I would have uncontainable joy.
Slingshot supports blobs just as well as files: https://github.com/CulturalMe/meteor-slingshot/issues/22
So when you have a canvas object called canvTex and a Slingshot.Upload instance called uploader, then uploading the canvas image is as easy as:
canvTex.toBlob(function (blob) {
  uploader.send(blob, function (error, downloadUrl) {
    //...
  });
});
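If you need to cover a browser without canvas.toBlob support, a common fallback (my sketch, not part of Slingshot) is to build the Blob from the data URL by hand:
// Fallback sketch: convert a canvas data URL to a Blob manually.
function dataURLToBlob(dataURL) {
  var parts = dataURL.split(',');
  var mime = parts[0].match(/:(.*?);/)[1]; // e.g. "image/jpeg"
  var binary = atob(parts[1]);             // decode the base64 payload
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}

var blob = dataURLToBlob(canvTex.toDataURL('image/jpeg'));
uploader.send(blob, function (error, downloadUrl) { /* ... */ });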
Because blobs have no names, you must take that into account when defining your directive. Do not attempt to generate a key based on the name of the file.
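For example, a directive whose key ignores the (nonexistent) file name might look like this sketch; the bucket name and key scheme are assumptions:
// Server side: build the S3 key from the user ID and a timestamp instead of
// the blob's (missing) name. Bucket and path scheme are illustrative.
Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
  bucket: "my-texture-bucket",
  allowedFileTypes: ["image/jpeg", "image/png"],
  authorize: function () {
    return !!this.userId; // only logged-in users may upload
  },
  key: function (file) {
    return this.userId + "/" + Date.now() + ".jpg";
  }
});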

TideSDK: How to save a cookie's information to be accessed in a different file?

I am trying to use TideSDK's Ti.Network to set the name and value of my cookie.
But how do I get this cookie's value from my other pages?
var httpcli;
httpcli = Ti.Network.createHTTPCookie();
httpcli.setName(cname); //cname is my cookie name
httpcli.setValue(cvalue); //cvalue is the value that I am going to give my cookie
alert("COOKIE value is: "+httpcli.getValue());
How would I retrieve this cookie value from my next page? Thank you in advance!
OK, there are a lot of ways to create storage content in TideSDK. Cookies could be one of them, but they are not mandatory.
In my personal opinion, cookies are too limited for storing information, so I suggest you store user information in a JSON file; that way you can store anything from single pieces of information to large structures (depending on the project). Suppose you have a project in which the client has to store app configuration, like a 'preferred path' for storing files, or strings (such as first name and last name); you can use Ti.Filesystem to store and read such information.
In the following example, I read a stored JSON string from a file.
File Contents (conf.json):
{
  "fname" : "erick",
  "lname" : "rodriguez",
  "customFolder" : "c:\\myApp\\userConfig\\"
}
Note: For some reason, TideSDK cannot parse a JSON structure like the one above, because it interprets conf.json as a text file; the parsing will work if you remove all the tabs and spaces:
{"fname":"erick","lname":"rodriguez","customFolder":"c:\\myApp\\userConfig\\"}
Now let's read it (myappfolder is the path of your storage folder):
readfi = Ti.Filesystem.getFile(myappfolder, "conf.json");
Stream = Ti.Filesystem.getFileStream(readfi);
Stream.open(Ti.Filesystem.MODE_READ);
contents = Stream.read();
contents = JSON.parse(contents.toString()); // toString() must be called, not just referenced
console.log(contents);
Now let's store it....
function writeTextFile(pathToFile, contents) {
  var file, stream;
  file = Ti.Filesystem.getFile(pathToFile);
  stream = Ti.Filesystem.getFileStream(file);
  stream.open(Ti.Filesystem.MODE_WRITE); // open the stream for writing
  stream.write(contents);
  stream.close();
  return true;
};
// If a JSON var is defined in JS, there is no problem:
var jsonObject = {
  "fname" : "joe",
  "lname" : "doe",
  "customFolder" : "c:\\joe\\folder\\"
}
var file = pathToMyAppStorage + "\\" + "conf.json";
var saved = writeTextFile(file,JSON.stringify(jsonObject));
console.log(saved);
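Finally, to answer the original question of accessing the data from a different page: every page in a TideSDK app can call Ti.Filesystem, so the read snippet above can be wrapped in a small helper (the name is illustrative) and reused wherever it is needed:
// Illustrative helper: read the stored JSON config from any page of the app.
function readJsonFile(pathToFile) {
  var file = Ti.Filesystem.getFile(pathToFile),
      stream = Ti.Filesystem.getFileStream(file),
      contents;
  stream.open(Ti.Filesystem.MODE_READ);
  contents = stream.read();
  stream.close();
  return JSON.parse(contents.toString());
}

var conf = readJsonFile(pathToMyAppStorage + "\\conf.json");
console.log(conf.fname); // "joe"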