Parse-Server Cloud Code Query Doesn't Return All Columns

I have set up Parse-Server on AWS Elastic Beanstalk by following this guide. I've then written a cloud-code function which fetches a single record from a specific class/collection. The collection contains about 20 columns. However, the object fetched as a result of the query contains only about 8 columns. I've made sure the record does have data in the columns which are missing from the query result. Am I missing something here, or is it some limitation in Parse? Is there any way to force Parse to fetch these columns?
Parse.Cloud.define('confirmAppointment', function(request, response) {
    var staffId = request.params.staffId;
    var appointmentId = request.params.appointmentId;
    var appointmentRequest = Parse.Object.extend("AppointmentRequest");
    appointmentRequest.id = appointmentId;
    appointmentRequest.staffId = staffId;
    var query = new Parse.Query(appointmentRequest);
    query.first({
        useMasterKey: true,
        success: function(appointment) {
            if (appointment) {
                // these fields are not found in the fetched appointment object
                // they do exist however in mongodb
                var requesterUserId = appointment.get("requesterUserId");
                var staffUserId = appointment.get("staffUserId");
                var staffName = appointment.get("staffNameEn");
                ...
            }
        }
        ...
    });
});

There might be a problem in how you construct the query: Parse.Object.extend("AppointmentRequest") returns a class, not an instance, so assigning id and staffId to it does not constrain the query at all. Build the query with equalTo constraints instead:
Parse.Cloud.define('confirmAppointment', function(req, res) {
    var staffId = req.params.staffId;
    var appointmentId = req.params.appointmentId;
    var query = new Parse.Query("AppointmentRequest");
    query.equalTo('objectId', appointmentId);
    query.equalTo('staffId', staffId);
    query.first({
        useMasterKey: true,
        success: function(appointment) {
            res.success(appointment.get("requesterUserId"));
        },
        error: function(err) {
            res.error(err);
        }
    });
});
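Side note (not from the original answers): the success/error callback options shown above are the legacy Parse API. Newer Parse JS SDK releases (2.x and later) and Parse Server 3+ cloud code use promises and async functions instead; a minimal sketch of the same query in that style:
Parse.Cloud.define('confirmAppointment', async (request) => {
    const query = new Parse.Query("AppointmentRequest");
    query.equalTo('objectId', request.params.appointmentId);
    query.equalTo('staffId', request.params.staffId);
    // first() resolves to the matching object, or undefined if none matched
    const appointment = await query.first({ useMasterKey: true });
    if (!appointment) {
        throw new Parse.Error(Parse.Error.OBJECT_NOT_FOUND, 'No matching appointment');
    }
    return appointment.get("requesterUserId");
});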

The issue turned out to be that when I migrated my data from Parse to my MongoLab-hosted MongoDB instance, I did not click the 'Finalize' button in the Parse migration wizard. That was intentional, as Parse was warning me that clicking Finalize would make the migration permanent and I would no longer be able to get back to the Parse-managed database. On the other hand, I could see that all the data had been successfully migrated to MongoLab, and technically that should have been enough for my AWS-hosted Parse Server to work on the new database without any issue. But somehow, clicking the "Finalize" button in Parse did some magic (I still don't understand what) and my queries started returning the expected results.
I was able to reproduce the same issue when migrating to Heroku as well, so I was sure it had nothing to do with AWS.
Hope this helps someone.

Related

What's the best way of getting content from SharePoint to AWS/Azure programmatically?

How do we move from SharePoint to an AWS estate?
I have found various sources on how to do it through the UI, but nothing programmatic.
Any suggestions would be greatly appreciated.
Here are the UI steps I've found, but again nothing programmatic: https://www.youtube.com/watch?v=VW6gqVsvOeQ
You should be able to do this in code using the Graph APIs. In particular, you'll be looking for the Working with files in Microsoft Graph section of the API documentation.
Follow these steps to install the Graph SDK.
Follow these steps to Create an app registration.
Follow these steps to Add a certificate to the app registration.
Get an auth token in your code.
Get the site ID by appending /_api/site/id to the site URL, e.g. https://contoso.sharepoint.com/sites/TheSite/_api/site/id
Get the list of drives associated with the document libraries on your site.
For each drive, get a list of children.
Iterate each child recursively to expand through folders and sub folders.
Download items.
Upload items to AWS (a sketch of this step follows the snippets below).
Getting an auth token
using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Microsoft.Graph;

var scopes = new[] { "https://graph.microsoft.com/.default" };

// Multi-tenant apps can use "common",
// single-tenant apps must use the tenant ID from the Azure portal
var tenantId = "common";

// Values from app registration
var clientId = "YOUR_APP/CLIENT_ID";
var clientCertificate = new X509Certificate2("MyCertificate.pfx");

var options = new TokenCredentialOptions
{
    AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
};

// https://learn.microsoft.com/dotnet/api/azure.identity.clientcertificatecredential
var clientCertCredential = new ClientCertificateCredential(
    tenantId, clientId, clientCertificate, options);

var graphClient = new GraphServiceClient(clientCertCredential, scopes);
Get list of drives
var drives = await graphClient.Sites["{site-id}"].Drives
    .Request()
    .GetAsync();
Get root items of a drive
var children = await graphClient.Drives["{drive-id}"].Root.Children
    .Request()
    .GetAsync();
Get children of items
var children = await graphClient.Drives["{drive-id}"].Items["{driveItem-id}"].Children
    .Request()
    .GetAsync();
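Recurse through folders
Not part of the original answer, but a minimal sketch of the recursive walk from the steps above, assuming the same v4-style Graph SDK as the other snippets here (paging via NextPageRequest is omitted for brevity):
async Task WalkDriveItems(GraphServiceClient graphClient, string driveId, string itemId)
{
    var children = await graphClient.Drives[driveId].Items[itemId].Children
        .Request()
        .GetAsync();
    foreach (var child in children)
    {
        if (child.Folder != null)
        {
            // a non-null Folder facet marks a folder: recurse into it
            await WalkDriveItems(graphClient, driveId, child.Id);
        }
        else
        {
            // a leaf file: download its content and upload it to AWS (see below)
        }
    }
}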
Download files
var stream = await graphClient.Me.Drive.Items["{driveItem-id}"].Content
    .Request()
    .GetAsync();
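Upload files to AWS
This step is not covered by the original snippets; here is a minimal sketch using the AWSSDK.S3 package, where the bucket name and key are assumptions for illustration and the stream is the one returned by the download call above:
using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client(); // credentials resolved from environment/profile/instance role
var putRequest = new PutObjectRequest
{
    BucketName = "my-bucket",        // assumption: your target bucket
    Key = "TheSite/MyFile.docx",     // assumption: e.g. reuse the SharePoint path/file name
    InputStream = stream             // the stream returned by the Content request above
};
await s3Client.PutObjectAsync(putRequest);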

Read data from Google Cloud Datastore by dialogflow agent

I am a newbie in the chatbot domain. I need to develop a Dialogflow chatbot which can store data collected from the user in Google Cloud Datastore entities (not the Firebase Realtime Database) and retrieve it when the user wants to search.
I am able to write the data collected from the user to Datastore, but I am struggling to retrieve it. I am writing the functions in the Dialogflow inline editor.
Write function :
// assumes the usual inline-editor setup, e.g.:
// const Datastore = require('@google-cloud/datastore');
// const datastore = new Datastore();
function order_pizza(agent) {
    var pizza_size = agent.parameters.size;
    var pizza_topping = agent.parameters.pizza_topping;
    var date_time = agent.parameters.date_time; // was agent.parameters.size, presumably a copy-paste slip
    const taskKey = datastore.key('order_item');
    const entity = {
        key: taskKey,
        data: {
            item_name: 'pizza',
            topping: pizza_topping,
            date_time: date_time,
            order_time: new Date().toLocaleString(),
            size: pizza_size
        }
    };
    return datastore.save(entity).then(() => {
        console.log(`Saved ${entity.key.name}: ${entity.data.item_name}`);
        agent.add(`Your order for ${pizza_topping} pizza has been placed!`);
    });
}
where "order_item" is the kind(table in datastore) the data is being stored. It is storing the data successfully.
Read data (this function is not working):
function search_pizza(agent){
    const taskKey = datastore.key('order_item');
    var orderid = agent.parameters.id;
    const query = datastore.createQuery('taskKey').filter('ID', '=', orderid);
    return datastore.runQuery(query).then((result) => {
        agent.add(result[0]);
    });
}
This is what I have tried so far! Wherever I search, I can only find results for the Firebase Realtime Database, but no solution for Google Cloud Datastore.
I have followed many tutorials, but can't quite get it right. Kindly help!
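There is no accepted answer in this thread, but for reference, a minimal sketch of how the read could work with the Node.js Datastore client. Note that createQuery takes the kind name ('order_item'), not the variable name 'taskKey', and the filter must target a property that was actually stored on the entity (the write function above never stores an ID property, so order_id here is an assumption for illustration):
function search_pizza(agent) {
    var orderid = agent.parameters.id;
    // query the 'order_item' kind, filtering on a stored property
    const query = datastore
        .createQuery('order_item')
        .filter('order_id', '=', orderid);
    return datastore.runQuery(query).then((results) => {
        const entities = results[0]; // runQuery resolves to [entities, queryInfo]
        if (entities.length > 0) {
            agent.add(`Found a ${entities[0].size} ${entities[0].topping} pizza order.`);
        } else {
            agent.add('No order found with that ID.');
        }
    });
}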

Uploaded media files 404 on production but show up hours later

We've built a custom front end for users to post threads and upload images on top of Sitecore 9. Recently, we moved the user-generated content to its own database, so the media files are no longer in the proper Sitecore media gallery. Also, file storage is on a networked share. Uploading images works just fine; the problem lies immediately afterwards.
Upon image upload, our REST API returns the media URL from MediaManager.GetMediaUrl(itemId). This also works: it returns a valid URL, it is formatted correctly, and it should resolve. Unfortunately, for some time after, the URL does the Sitecore dance and 302s to our 404 page.
I can see the image through the content editor and our custom content handler injects the image folder node into both master and web databases.
Why would the link manager be able to find the URL while the image is not available? I uploaded something yesterday afternoon, and when I checked this morning, the image was available on the site. Any information or suggestions are appreciated.
I have tried saving the image multiple times, hoping that this might trigger whatever mystical Sitecore event causes images to show. Since there isn't a publish, due to the separate database, I can't try that. I've removed all versions, thinking it couldn't default to a particular language. Nothing. Only time seems to make the images visible. The code below works just fine; I'm just putting it here to show some work.
public MediaItem UploadSimpleMedia(MediaGallerySimpleUploadRequest request)
{
    try
    {
        var destinationFolder = request.ParentItemId != null
            ? _content.GetItem<Item>(request.ParentItemId.Value)
            : _content.GetItem<Item>(_publicLibraryPath + "/embeded");
        var name = !string.IsNullOrEmpty(request.Name)
            ? ItemUtil.ProposeValidItemName(request.Name)
            : ItemUtil.ProposeValidItemName(request.Files[0].FileName);
        var creator = new MediaCreator();
        var tags = request.Tags != null ? request.Tags.Split(',') : new string[0];
        var tagIds = _tagService.GetTagIds(tags);
        var tagIdString = string.Join(",", tagIds);
        var options = new MediaCreatorOptions()
        {
            AlternateText = request.Name,
            FileBased = true,
            IncludeExtensionInItemName = false,
            Versioned = false,
            Destination = $"{destinationFolder.Paths.Path}/{name}",
            Database = Factory.GetDatabase("content")
        };
        using (new SecurityDisabler())
        using (new DatabaseCacheDisabler())
        {
            MediaItem item = creator.CreateFromStream(request.Files[0].InputStream, request.Files[0].FileName, options);
            item.BeginEdit();
            item.InnerItem["Title"] = request.Name;
            item.InnerItem["Description"] = request.Description;
            item.InnerItem["__Semantics"] = tagIdString;
            // Sitecore won't calculate MIME Type or File Size automatically unless you upload directly
            // to the Sitecore Media Library. We're uploading in-place for private groups for permission inheritance
            // and now because of indexing, they are getting uploaded to /Community/Gallery/Files if not private.
            item.InnerItem["Size"] = request.Files[0].ContentLength.ToString();
            item.InnerItem["Mime Type"] = request.Files[0].ContentType;
            item.EndEdit();
            // Just pausing here to reflect on why Sitecore is so sexily complicated
            // that it takes time to display an image on the CD, but has no problem giving
            // up the media url
            // (Note: this call only retrieves the HTML cache; nothing is cleared here.)
            CacheManager.GetHtmlCache(Sitecore.Context.Site);
            _searchService.RefreshIndex(item.ID.Guid);
            var fromDatabase = _content.GetItem<Item>(item.ID.Guid);
            return fromDatabase;
        }
    }
    catch (Exception ex)
    {
        _logger.Error(this.GetType().AssemblyQualifiedName, ex);
        _logger.Trace(ex.StackTrace);
    }
    return null;
}
I don't get any error messages in either our custom logs or the default Sitecore logs. Images just aren't resolving. Is it caching? I've even nuked all caches by calling CacheManager.ClearAllCaches().

Validating JSON schema in Postman

When using Postman I validate the JSON response like so:
tv4.addSchema(globalSchema);
const valResult = tv4.validate(data, schema);
// schema is an object, which is a subschema from the larger globalSchema
which works fine, except for the error reporting. The error object I get is missing dataPath and schemaPath, making it hard for my user to find out where the actual problem is. Is there a way to get those properties? (tried validateResult and validateMultiple to no avail)
As an alternative I tried Ajv, but as my schemas are draft-04, it gives me errors. The advice from their site
var ajv = new Ajv({schemaId: 'id'});
// If you want to use both draft-04 and draft-06/07 schemas:
// var ajv = new Ajv({schemaId: 'auto'});
ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json'));
does not work because the Postman sandbox does not allow me to require that… any thoughts?
See also: https://community.getpostman.com/t/json-schema-validation-troubles/5024
Here's how I validate schemas with Postman to get more detailed errors:
const schema = {
};
var jsonData = JSON.parse(responseBody);
pm.test('Checking Response Against Schema Validation', function() {
    var result = tv4.validateMultiple(jsonData, schema);
    console.log(result);
    pm.expect(result.valid).to.be.true;
});
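The result object from validateMultiple also answers the dataPath/schemaPath question above: each entry of result.errors carries message, dataPath, and schemaPath properties, so you can surface them directly (a small sketch, not part of the original answer):
var result = tv4.validateMultiple(jsonData, schema);
result.errors.forEach(function(err) {
    // each tv4 error exposes message, dataPath and schemaPath
    console.log(err.message + ' at ' + err.dataPath + ' (schema: ' + err.schemaPath + ')');
});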

How to log into a third-party website using Google Apps Script and manage data on login?

I am interested in creating a Google Apps Script that, when run, would log into a specific website (third party) and perform certain functions within it (pressing buttons/copying text).
After browsing Stack Overflow and other forums, I have created a script that allows me to log into my website (source1 source2).
However, I am having difficulty staying logged in and managing the data.
// The current code just tests whether I can get data from within the website.
// The results are displayed in a Google app.
// (Note: UiApp is deprecated; HtmlService is its modern replacement.)
function doGet() {
    var app = UiApp.createApplication();
    app.add(app.createLabel(display_basic_data()));
    return app;
}

// logs into the website and displays data
function display_basic_data() {
    var data;
    var url = "http://www.website.bla/users/sign_in";
    var payload = {"user[username]": "usr", "user[password]": "ps"};
    var opt = {"method": "post", "payload": payload, "followRedirects": false};
    var response = UrlFetchApp.fetch(url, opt);
    data = response;
    return data;
}
Currently, the data returned from display_basic_data() is
"<html><body>You are being redirected.</body></html>".
If I try to change my script so that "followRedirects" is true, the data is equivalent to the HTML of the login page.
I understand I have to play around with cookies in order to 'stay' logged in, but I have no idea what to do, as the examples I found online proved fruitless for me.
Any help would be much appreciated!!!
You may want to do something like this:
var cookie = response.getAllHeaders()['Set-Cookie'];
// maybe parse the cookie here, depending on what the header contains
var headers = {'Cookie': cookie};
var opt2 = {"headers": headers};
var pagedata = UrlFetchApp.fetch("http://www.website.bla/home", opt2);
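To make that concrete, here is a minimal end-to-end sketch under the question's assumptions (the URL, form field names, and the /home path are carried over from the question, not verified): disable redirects so the Set-Cookie header of the 302 stays visible, strip the cookie attributes, and replay the session cookie on subsequent requests.
function fetchLoggedInPage() {
    var loginUrl = "http://www.website.bla/users/sign_in";
    var payload = {"user[username]": "usr", "user[password]": "ps"};
    var loginResponse = UrlFetchApp.fetch(loginUrl, {
        method: "post",
        payload: payload,
        followRedirects: false  // keep the 302 so Set-Cookie stays readable
    });
    // Set-Cookie may carry attributes like "path=/; HttpOnly"; keep only "name=value"
    var rawCookie = loginResponse.getAllHeaders()['Set-Cookie'];
    var cookies = [].concat(rawCookie).map(function(c) {
        return c.split(';')[0];
    }).join('; ');
    // replay the session cookie to stay logged in on the next request
    var pageResponse = UrlFetchApp.fetch("http://www.website.bla/home", {
        headers: {'Cookie': cookies}
    });
    return pageResponse.getContentText();
}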