Rename an image file when uploading in Ionic 2

I stumbled across this function on the internet:
private createFileName() {
    var d = new Date(),
        n = d.getTime(),
        newFileName = n + ".jpg";
    return newFileName;
}
Its purpose is to rename an image file. The questions I have are:
1- Does this restrict image uploads to jpg only, or does it convert the image to jpg?
2- If it's the former, how can I get the image extension before upload and modify the function to private createFileName(ext), so as to upload every image type?
If it would help, here is the full code:
lastImage: string = null;
loading: Loading;

public takePicture(sourceType) {
    // Create options for the Camera Dialog
    var options = {
        quality: 100,
        sourceType: sourceType,
        saveToPhotoAlbum: false,
        correctOrientation: true
    };
    // Get the data of an image
    this.camera.getPicture(options).then((imagePath) => {
        // Special handling for Android library
        if (this.platform.is('android') && sourceType === this.camera.PictureSourceType.PHOTOLIBRARY) {
            this.filePath.resolveNativePath(imagePath)
                .then(filePath => {
                    let correctPath = filePath.substr(0, filePath.lastIndexOf('/') + 1);
                    let currentName = imagePath.substring(imagePath.lastIndexOf('/') + 1, imagePath.lastIndexOf('?'));
                    this.copyFileToLocalDir(correctPath, currentName, this.createFileName());
                });
        } else {
            var currentName = imagePath.substr(imagePath.lastIndexOf('/') + 1);
            var correctPath = imagePath.substr(0, imagePath.lastIndexOf('/') + 1);
            this.copyFileToLocalDir(correctPath, currentName, this.createFileName());
        }
    }, (err) => {
        this.presentToast('Error while selecting image.');
    });
}

private copyFileToLocalDir(namePath, currentName, newFileName) {
    this.file.copyFile(namePath, currentName, cordova.file.dataDirectory, newFileName).then(success => {
        this.lastImage = newFileName;
    }, error => {
        this.presentToast('Error while storing file.');
    });
}

private presentToast(text) {
    let toast = this.toastCtrl.create({
        message: text,
        duration: 3000,
        position: 'top'
    });
    toast.present();
}

// Always get the accurate path to your app's folder
public pathForImage(img) {
    if (img === null) {
        return '';
    } else {
        return cordova.file.dataDirectory + img;
    }
}

public uploadImage() {
    // Destination URL
    var url = "http://my_url/uploads.php";
    // File for Upload
    var targetPath = this.pathForImage(this.lastImage);
    // File name only
    var filename = this.lastImage;
    var options = {
        fileKey: "file",
        fileName: filename,
        chunkedMode: false,
        mimeType: "multipart/form-data",
        params: { 'fileName': filename }
    };
    const fileTransfer: TransferObject = this.transfer.create();
    this.loading = this.loadingCtrl.create({
        content: 'Uploading...',
    });
    this.loading.present();
    // Use the FileTransfer to upload the image
    fileTransfer.upload(targetPath, url, options).then(data => {
        this.loading.dismissAll();
        this.presentToast('Image successfully uploaded.');
    }, err => {
        this.loading.dismissAll();
        this.presentToast('Error while uploading file.');
    });
}

1- It does not restrict uploads to jpg, and it does not convert to jpg. As the method name says, create file name: it creates a string with the file name, just that.
2- The image extension will be jpg, as it is the default camera encodingType, and it seems like you are taking pictures using the camera. You can change the encodingType in the camera options.
You can find all the camera options at https://ionicframework.com/docs/native/camera/
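If you do change the encodingType (or let users pick non-JPEG files from the library), a minimal sketch of the createFileName(ext) variant you describe could look like this. The 'jpg' fallback and the extension parsing are assumptions, not part of the original code:

private createFileName(ext: string = 'jpg') {
    // The timestamp keeps each stored file name unique.
    return new Date().getTime() + '.' + ext;
}

// e.g. in takePicture(), after currentName has been computed
// (currentName already has any Android query string stripped):
let dot = currentName.lastIndexOf('.');
let ext = dot > -1 ? currentName.substring(dot + 1).toLowerCase() : 'jpg';
this.copyFileToLocalDir(correctPath, currentName, this.createFileName(ext));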

Related

How to move an array of images from Cloudinary to an S3 bucket

I want to do two things in my code:
1. Move the array of images from Cloudinary to an S3 bucket.
2. Replace the Cloudinary URLs with the S3 bucket URLs.
I was able to replace the URLs, but I am not able to move the images from Cloudinary to S3.
Here is that API:
const userSchemaPortfolio = async (req, res) => {
    let userPortfolio = (await userSchema.find({}, { portfolio: 1 })).forEach(async function (docs) {
        let updatedPortfolio = []
        if (docs.portfolio.length > 0) {
            // console.log(docs.portfolio.length, typeof (docs.portfolio), Object.keys(docs.portfolio).length, docs._id)
            for (i = 0; i < docs.portfolio.length; i++) {
                if (docs.portfolio != "") {
                    // console.log(docs)
                    let endPoint = new Date().getTime() + '_' + docs.portfolio[i].toString().split('/').pop();
                    // console.log(endPoint, docs.portfolio[i])
                    let imageURL = docs.portfolio[i]
                    // console.log(imageURL)
                    let resImg = await fetch(imageURL);
                    console.log(resImg.url)
                    let blob = await resImg.buffer()
                    // console.log(blob)
                    var uploadedImage = s3.upload({
                        Bucket: process.env.AWS_BUCKET,
                        Key: endPoint,
                        Body: blob,
                    }).promise()
                    // console.log(uploadedImage)
                    var portfolio = "https://**************.amazonaws.com/" + endPoint;
                    // console.log("....>>>>>",portfolio)
                    updatedPortfolio.push(portfolio);
                    // console.log(updatedPortfolio)
                    // return;
                }
            }
            // console.log(updatedPortfolio)
            return;
            let updateData = { portfolio: updatedPortfolio }
            await userSchema.updateOne({ _id: docs._id }, { $set: updateData });
            // console.log(updatedPortfolio, docs._id, docs.portfolio)
        }
    });
    let user = await userSchema.find({}, { portfolio: 1 })
    // console.log(chat)
    res.send(user)
}
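Two things in this loop likely explain why the images never reach S3: the bare return; exits before updateOne() ever runs, and s3.upload(...).promise() is never awaited, so the function can finish before the uploads do (forEach also won't await an async callback). A minimal sketch of the corrected loop body, assuming aws-sdk v2 and a fetch that exposes .buffer(), such as node-fetch v2:

for (let i = 0; i < docs.portfolio.length; i++) {
    const imageURL = docs.portfolio[i];
    const endPoint = new Date().getTime() + '_' + imageURL.toString().split('/').pop();
    const resImg = await fetch(imageURL);
    const blob = await resImg.buffer();
    // Wait for the upload to finish before recording the new URL.
    const uploadedImage = await s3.upload({
        Bucket: process.env.AWS_BUCKET,
        Key: endPoint,
        Body: blob,
    }).promise();
    updatedPortfolio.push(uploadedImage.Location); // URL reported back by S3
}
// Persist the rewritten URLs -- this must run, so no early return above it.
await userSchema.updateOne({ _id: docs._id }, { $set: { portfolio: updatedPortfolio } });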

How to choose a different Lambda function while using Start Streaming to Amazon Elasticsearch Service

Following Streaming CloudWatch Logs Data to Amazon Elasticsearch Service, it works fine to stream CloudWatch logs to ELK with one log group and one Lambda function.
But now I want to change the target Lambda function for my other log groups, and I am not able to do that, as there is no option for it in the AWS console.
Any help will be appreciated.
Thanks
I was streaming to ELK using the AWS console option Start Streaming to Amazon Elasticsearch Service, but I could not change or choose a different Lambda function, as only one Lambda function can be selected for any log group using this option.
So I created a new Lambda function and set the log group's stream target to that AWS Lambda function.
Here is the code, which is all you need. The Node.js version for the Lambda function is 4.x, as there was some issue with the newer version, but the plus point is that it does not require any extra npm packages.
// v1.1.2
var https = require('https');
var zlib = require('zlib');
var crypto = require('crypto');

var endpoint = 'search-my-test.us-west-2.es.amazonaws.com';

exports.handler = function(input, context) {
    // decode input from base64
    var zippedInput = new Buffer(input.awslogs.data, 'base64');
    // decompress the input
    zlib.gunzip(zippedInput, function(error, buffer) {
        if (error) { context.fail(error); return; }
        // parse the input from JSON
        var awslogsData = JSON.parse(buffer.toString('utf8'));
        // transform the input to Elasticsearch documents
        var elasticsearchBulkData = transform(awslogsData);
        // skip control messages
        if (!elasticsearchBulkData) {
            console.log('Received a control message');
            context.succeed('Control message handled successfully');
            return;
        }
        // post documents to the Amazon Elasticsearch Service
        post(elasticsearchBulkData, function(error, success, statusCode, failedItems) {
            console.log('Response: ' + JSON.stringify({
                "statusCode": statusCode
            }));
            if (error) {
                console.log('Error: ' + JSON.stringify(error, null, 2));
                if (failedItems && failedItems.length > 0) {
                    console.log("Failed Items: " +
                        JSON.stringify(failedItems, null, 2));
                }
                context.fail(JSON.stringify(error));
            } else {
                console.log('Success: ' + JSON.stringify(success));
                context.succeed('Success');
            }
        });
    });
};

function transform(payload) {
    if (payload.messageType === 'CONTROL_MESSAGE') {
        return null;
    }
    var bulkRequestBody = '';
    payload.logEvents.forEach(function(logEvent) {
        var timestamp = new Date(1 * logEvent.timestamp);
        // index name format: prod-background-wo-YYYY.MM.DD
        var indexName = [
            'prod-background-wo-' + timestamp.getUTCFullYear(), // year
            ('0' + (timestamp.getUTCMonth() + 1)).slice(-2),    // month
            ('0' + timestamp.getUTCDate()).slice(-2)            // day
        ].join('.');
        var source = buildSource(logEvent.message, logEvent.extractedFields);
        source['response_time'] = source["end"] - source["start"];
        source['#id'] = logEvent.id;
        source['#timestamp'] = new Date(1 * logEvent.timestamp).toISOString();
        source['#message'] = logEvent.message;
        source['#owner'] = payload.owner;
        source['#log_group'] = payload.logGroup;
        source['#log_stream'] = payload.logStream;
        var action = { "index": {} };
        action.index._index = indexName;
        action.index._type = payload.logGroup;
        action.index._id = logEvent.id;
        bulkRequestBody += [
            JSON.stringify(action),
            JSON.stringify(source),
        ].join('\n') + '\n';
    });
    return bulkRequestBody;
}

function buildSource(message, extractedFields) {
    if (extractedFields) {
        var source = {};
        for (var key in extractedFields) {
            if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
                var value = extractedFields[key];
                if (isNumeric(value)) {
                    source[key] = 1 * value;
                    continue;
                }
                var jsonSubString = extractJson(value);
                if (jsonSubString !== null) {
                    source['$' + key] = JSON.parse(jsonSubString);
                }
                source[key] = value;
            }
        }
        return source;
    }
    var jsonSubString = extractJson(message);
    if (jsonSubString !== null) {
        return JSON.parse(jsonSubString);
    }
    return {};
}

function extractJson(message) {
    var jsonStart = message.indexOf('{');
    if (jsonStart < 0) return null;
    var jsonSubString = message.substring(jsonStart);
    return isValidJson(jsonSubString) ? jsonSubString : null;
}

function isValidJson(message) {
    try {
        JSON.parse(message);
    } catch (e) { return false; }
    return true;
}

function isNumeric(n) {
    return !isNaN(parseFloat(n)) && isFinite(n);
}

function post(body, callback) {
    var requestParams = buildRequest(endpoint, body);
    var request = https.request(requestParams, function(response) {
        var responseBody = '';
        response.on('data', function(chunk) {
            responseBody += chunk;
        });
        response.on('end', function() {
            var info = JSON.parse(responseBody);
            var failedItems;
            var success;
            if (response.statusCode >= 200 && response.statusCode < 299) {
                failedItems = info.items.filter(function(x) {
                    return x.index.status >= 300;
                });
                success = {
                    "attemptedItems": info.items.length,
                    "successfulItems": info.items.length - failedItems.length,
                    "failedItems": failedItems.length
                };
            }
            var error = response.statusCode !== 200 || info.errors === true ? {
                "statusCode": response.statusCode,
                "responseBody": responseBody
            } : null;
            callback(error, success, response.statusCode, failedItems);
        });
    }).on('error', function(e) {
        callback(e);
    });
    request.end(requestParams.body);
}

function buildRequest(endpoint, body) {
    var endpointParts = endpoint.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
    var region = endpointParts[2];
    var service = endpointParts[3];
    var datetime = (new Date()).toISOString().replace(/[:\-]|\.\d{3}/g, '');
    var date = datetime.substr(0, 8);
    var kDate = hmac('AWS4' + process.env.AWS_SECRET_ACCESS_KEY, date);
    var kRegion = hmac(kDate, region);
    var kService = hmac(kRegion, service);
    var kSigning = hmac(kService, 'aws4_request');
    var request = {
        host: endpoint,
        method: 'POST',
        path: '/_bulk',
        body: body,
        headers: {
            'Content-Type': 'application/json',
            'Host': endpoint,
            'Content-Length': Buffer.byteLength(body),
            'X-Amz-Security-Token': process.env.AWS_SESSION_TOKEN,
            'X-Amz-Date': datetime
        }
    };
    var canonicalHeaders = Object.keys(request.headers)
        .sort(function(a, b) { return a.toLowerCase() < b.toLowerCase() ? -1 : 1; })
        .map(function(k) { return k.toLowerCase() + ':' + request.headers[k]; })
        .join('\n');
    var signedHeaders = Object.keys(request.headers)
        .map(function(k) { return k.toLowerCase(); })
        .sort()
        .join(';');
    var canonicalString = [
        request.method,
        request.path, '',
        canonicalHeaders, '',
        signedHeaders,
        hash(request.body, 'hex'),
    ].join('\n');
    var credentialString = [date, region, service, 'aws4_request'].join('/');
    var stringToSign = [
        'AWS4-HMAC-SHA256',
        datetime,
        credentialString,
        hash(canonicalString, 'hex')
    ].join('\n');
    request.headers.Authorization = [
        'AWS4-HMAC-SHA256 Credential=' + process.env.AWS_ACCESS_KEY_ID + '/' + credentialString,
        'SignedHeaders=' + signedHeaders,
        'Signature=' + hmac(kSigning, stringToSign, 'hex')
    ].join(', ');
    return request;
}

function hmac(key, str, encoding) {
    return crypto.createHmac('sha256', key).update(str, 'utf8').digest(encoding);
}

function hash(str, encoding) {
    return crypto.createHash('sha256').update(str, 'utf8').digest(encoding);
}
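For reference, transform() emits the Elasticsearch _bulk format: one action line followed by one document line per log event, each newline-terminated. With hypothetical values (the IDs, dates, and log group are illustrative only), a single event becomes:

// Two lines appended to bulkRequestBody per log event:
{"index":{"_index":"prod-background-wo-2017.04.26","_type":"/aws/lambda/example","_id":"3341861..."}}
{"response_time":42,"#id":"3341861...","#timestamp":"2017-04-26T10:00:00.000Z","#message":"...","#owner":"123456789012","#log_group":"/aws/lambda/example","#log_stream":"2017/04/26/[$LATEST]abc"}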

AWS Lambda reaches memory limit

I use this Lambda function to generate thumbnails on the fly, but I get the following error:
REPORT RequestId: 9369f148-2a85-11e7-a571-5f1e1818669e Duration: 188.18 ms Billed Duration: 200 ms Memory Size: 1536 MB Max Memory Used: 1536 MB
and
RequestId: 9369f148-2a85-11e7-a571-5f1e1818669e Process exited before completing request
So I think I am hitting the maximum memory limit. Without the uploadRecentImage() function it works, but if I add a new size to imgVariants[] I also hit the memory limit.
I think the way the function handles the imgVariants (the each loop) causes this, but I don't know how to do it better.
I would be grateful for any help.
Here is my function:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({
    imageMagick: true
}); // use ImageMagick
var util = require('util');

// configuration as code - add, modify, remove array elements as desired
var imgVariants = [
    {
        "SIZE": "Large1",
        "POSTFIX": "-l",
        "MAX_WIDTH": 6000,
        "MAX_HEIGHT": 6000,
        "SIZING_QUALITY": 75,
        "INTERLACE": "Line"
    },
    {
        "SIZE": "Large1",
        "POSTFIX": "-l",
        "MAX_WIDTH": 1280,
        "MAX_HEIGHT": 1280,
        "SIZING_QUALITY": 75,
        "INTERLACE": "Line"
    },
    {
        "SIZE": "Large1",
        "POSTFIX": "-l",
        "MAX_WIDTH": 500,
        "MAX_HEIGHT": 500,
        "SIZING_QUALITY": 75,
        "INTERLACE": "Line"
    },
    {
        "SIZE": "Large1",
        "POSTFIX": "-l",
        "MAX_WIDTH": 100,
        "MAX_HEIGHT": 100,
        "SIZING_QUALITY": 75,
        "INTERLACE": "Line"
    }
];
var DST_BUCKET_POSTFIX = "resized";

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function (event, context) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {
        depth: 5
    }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    // derive the file name and extension
    var srcFile = srcKey.match(/(.+)\.([^.]+)/);
    var srcName = srcFile[1];
    var scrExt = srcFile[2];
    // set the destination bucket
    var dstBucket = srcBucket + DST_BUCKET_POSTFIX;
    // make sure that source and destination are different buckets.
    if (srcBucket === dstBucket) {
        console.error("Destination bucket must be different from source bucket.");
        return;
    }
    if (!scrExt) {
        console.error('unable to derive file type extension from file key ' + srcKey);
        return;
    }
    if (scrExt != "jpg" && scrExt != "png") {
        console.log('skipping non-supported file type ' + srcKey + ' (must be jpg or png)');
        return;
    }

    function processImage(data, options, callback) {
        gm(data.Body).size(function (err, size) {
            var scalingFactor = Math.min(
                options.MAX_WIDTH / size.width,
                options.MAX_HEIGHT / size.height
            );
            var width = scalingFactor * size.width;
            var height = scalingFactor * size.height;
            this.resize(width, height)
                .quality(options.SIZING_QUALITY || 75)
                .interlace(options.INTERLACE || 'None')
                .toBuffer(scrExt, function (err, buffer) {
                    if (err) {
                        callback(err);
                    } else {
                        uploadImage(data.ContentType, buffer, options, callback);
                        uploadRecentImage(data.ContentType, buffer, options, callback);
                    }
                });
        });
    }

    function uploadImage(contentType, data, options, callback) {
        // Upload the transformed image to the destination S3 bucket.
        s3.putObject({
            Bucket: dstBucket,
            Key: options.MAX_WIDTH + '/' + srcName + '.' + scrExt,
            Body: data,
            ContentType: contentType
        },
        callback);
    }

    function uploadRecentImage(contentType, data, options, callback) {
        if (options.MAX_WIDTH == 500) {
            s3.putObject({
                Bucket: dstBucket,
                Key: 'recent_optimized.' + scrExt,
                Body: data,
                ContentType: contentType
            },
            callback);
        }
        if (options.MAX_WIDTH == 100) {
            s3.putObject({
                Bucket: dstBucket,
                Key: 'recent_thumb.' + scrExt,
                Body: data,
                ContentType: contentType
            },
            callback);
        }
    }

    // Download the image from S3 and process for each requested image variant.
    async.waterfall(
        [
            function download(next) {
                // Download the image from S3 into a buffer.
                s3.getObject({
                    Bucket: srcBucket,
                    Key: srcKey
                },
                next);
            },
            function processImages(data, next) {
                async.each(imgVariants, function (variant, next) {
                    processImage(data, variant, next);
                }, next);
            }
        ],
        function (err) {
            if (err) {
                console.error(
                    'Unable to resize ' + srcBucket + '/' + srcKey +
                    ' and upload to ' + dstBucket +
                    ' due to an error: ' + err
                );
            } else {
                console.log(
                    'Successfully resized ' + srcBucket + '/' + srcKey +
                    ' and uploaded to ' + dstBucket
                );
            }
            context.done();
        }
    );
};
You can limit the number of parallel processImage calls:
Replace async.each(imgVariants,
with async.eachLimit(imgVariants, 2,
so that no more than two images are processed in parallel.
The script has a bug:
uploadImage(data.ContentType, buffer, options, callback);
uploadRecentImage(data.ContentType, buffer, options, callback);
This calls callback twice, which is not allowed. Only call the callback once!
The script has another bug: event.Records[0] means it will only process the first image. If you upload multiple images at the same time, this will miss some of them.
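A minimal sketch of both fixes, assuming the async library's eachLimit/parallel API: eachLimit caps concurrency, and wrapping the two uploads in async.parallel makes the variant's callback fire exactly once. Note that uploadRecentImage must then also invoke its callback when no upload applies, or parallel would never complete:

// Cap concurrency: process at most two variants at a time.
async.eachLimit(imgVariants, 2, function (variant, next) {
    processImage(data, variant, next);
}, next);

// Inside processImage, invoke the callback exactly once:
.toBuffer(scrExt, function (err, buffer) {
    if (err) { return callback(err); }
    async.parallel([
        function (cb) { uploadImage(data.ContentType, buffer, options, cb); },
        function (cb) { uploadRecentImage(data.ContentType, buffer, options, cb); }
    ], callback); // fires once, after both uploads finish
});

// And in uploadRecentImage, fall through to the callback when the
// variant is neither the 500px nor the 100px one:
function uploadRecentImage(contentType, data, options, callback) {
    if (options.MAX_WIDTH == 500) {
        s3.putObject({ Bucket: dstBucket, Key: 'recent_optimized.' + scrExt, Body: data, ContentType: contentType }, callback);
    } else if (options.MAX_WIDTH == 100) {
        s3.putObject({ Bucket: dstBucket, Key: 'recent_thumb.' + scrExt, Body: data, ContentType: contentType }, callback);
    } else {
        callback(); // nothing to upload for this variant
    }
}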

Read content of SP.File object as text using JSOM

As the title suggests, I am trying to read the contents of a simple text file using JSOM. I am using a SharePoint-hosted add-in for this; the file I am trying to read resides on the host web in a document library.
Here's my JS code:
function printAllListNamesFromHostWeb() {
    context = new SP.ClientContext(appweburl);
    factory = new SP.ProxyWebRequestExecutorFactory(appweburl);
    context.set_webRequestExecutorFactory(factory);
    appContextSite = new SP.AppContextSite(context, hostweburl);
    this.web = appContextSite.get_web();
    documentslist = this.web.get_lists().getByTitle('Documents');
    var camlQuery = new SP.CamlQuery();
    camlQuery.set_viewXml('<View><ViewFields><FieldRef Name="Name"/></ViewFields></View>');
    listitems = documentslist.getItems(camlQuery);
    context.load(listitems, 'Include(File,FileRef)');
    context.executeQueryAsync(
        Function.createDelegate(this, successHandler),
        Function.createDelegate(this, errorHandler)
    );

    function successHandler() {
        var enumerator = listitems.getEnumerator();
        while (enumerator.moveNext()) {
            var results = enumerator.get_current();
            var file = results.get_file();
            // Don't know how to get this to work...
            var fr = new FileReader();
            fr.readAsText(file.get);
        }
    }

    function errorHandler(sender, args) {
        console.log('Could not complete cross-domain call: ' + args.get_message());
    }
}
However, in my success callback function, I don't know how I can extract the contents of the SP.File object. I tried using the FileReader object from the HTML5 API, but I couldn't figure out how to convert the SP.File object to a blob.
Can anybody give me a push here?
Once the file URL is determined, the file content can be loaded from the server using a regular HTTP GET request (e.g. using the jQuery.get() function).
Example
The example demonstrates how to retrieve the list of files in a library and then download the files' content:
loadItems("Documents",
function(items) {
var promises = $.map(items.get_data(),function(item){
return getFileContent(item.get_item('FileRef'));
});
$.when.apply($, promises)
.then(function(content) {
console.log("Done");
//print files content
$.each(arguments, function (idx, args) {
console.log(args[0])
});
},function(e) {
console.log("Failed");
});
},
function(sender,args){
console.log(args.get_message());
}
);
where
function loadItems(listTitle,success,error){
var ctx = SP.ClientContext.get_current();
var web = ctx.get_web();
var list = web.get_lists().getByTitle(listTitle);
var items = list.getItems(createAllFilesQuery());
ctx.load(items, 'Include(File,FileRef)');
ctx.executeQueryAsync(
function() {
success(items);
},
error);
}
function createAllFilesQuery(){
var qry = new SP.CamlQuery();
qry.set_viewXml('<View Scope="RecursiveAll"><Query><Where><Eq><FieldRef Name="FSObjType" /><Value Type="Integer">0</Value></Eq></Where></Query></View>');
return qry;
}
function getFileContent(fileUrl){
return $.ajax({
url: fileUrl,
type: "GET"
});
}
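For a single file whose server-relative URL is already known, the same helper can be used directly; the path below is hypothetical:

getFileContent('/sites/dev/Shared Documents/notes.txt')
    .then(function (content) {
        console.log(content); // raw text of the file
    });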

Apache Cordova: Office 365 API SharePointClient | How to download OneDrive files?

I am able to list all the files that I have in my OneDrive account, but I need to know how I can download them to my Android or Windows mobile phone.
I tried using the webUrl that I receive from the item, but it looks like it will need the token.
Current JavaScript code:
var AuthenticationContext = new O365Auth.Context();
var discoveryContext = new O365Discovery.Context(); // new DiscoveryServices.Context(new AuthenticationContext(authUrl), appId, redirectUrl);
discoveryContext.services(AuthenticationContext.getAccessTokenFn('Microsoft.SharePoint')).then(function (capabilities) {
    capabilities.forEach(function (v) {
        if (v.capability === 'MyFiles') {
            var msg = "";
            var sharePointnew = new Microsoft.CoreServices.SharePointClient(v.resourceId + '/_api/v1.0/me/',
                AuthenticationContext.getAccessTokenFn(v.resourceId));
            Microsoft.FileServices.FileFetcher
            var elementInfo = document.getElementById('popup_body');
            document.getElementById('popup_header').innerHTML = "My One Drive Files";
            /* This should open a popup */
            document.getElementById('popup_div').style.display = "block";
            document.getElementById('popup_overlay').style.display = "block";
            elementInfo.innerHTML += 'One Drive Files:';
            var fileName = 'demo.txt';
            var store = cordova.file.dataDirectory;
            var fileTransfer = new FileTransfer();
            console.log("About to start transfer");
            sharePointnew.files.getItems().fetch().then(function (result) {
                msg = '';
                result.currentPage.forEach(function (item) {
                    elementInfo.innerHTML += "<br />" + item.name + "<br />";
                    fileTransfer.download(item.webUrl, store + fileName,
                        function (entry) {
                            console.log("Success!");
                        },
                        function (err) {
                            console.log("Error");
                            console.dir(err);
                        });
                    msg += item._odataType + ' "' + item.name + '"\n';
                    var s = "";
                });
                console.log('All file system items: \n' + msg);
            }, function (error) {
                console.error(error);
            });
        }
    });
});
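One hedged approach, since webUrl needs the OAuth token: cordova-plugin-file-transfer accepts an options object with request headers as its sixth argument, so the bearer token can be attached to the download. This sketch assumes getAccessTokenFn(v.resourceId)() resolves to the raw token string, in line with how the library is used above:

AuthenticationContext.getAccessTokenFn(v.resourceId)().then(function (token) {
    fileTransfer.download(
        item.webUrl,
        store + item.name, // save under the item's own name
        function (entry) {
            console.log("Downloaded to " + entry.toURL());
        },
        function (err) {
            console.log("Download error");
            console.dir(err);
        },
        false, // trustAllHosts
        { headers: { 'Authorization': 'Bearer ' + token } }
    );
});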