How to use the LoopBack ACL REST API, and where is it used?

How do I use this API? I cannot find any documentation:
https://docs.strongloop.com/display/public/LB/ACL+REST+API
I create a user and I create a role.
I have ACLs in model.json, but the API does not return anything.
I also found this link, but it was not really helpful:
https://docs.strongloop.com/display/public/LB/Using+built-in+models#Usingbuilt-inmodels-Usermodel

This may help:
By default, the ACL REST API is not exposed. To expose it, add the following to models.json:
"acl": {
"public": true,
"options": {
"base": "ACL"
},
"dataSource": "db"
},
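Once that entry is in place and the server is restarted, the ACL model is exposed over REST like any other model. A minimal sketch of calling it, assuming Node 18+ (built-in fetch), the app on localhost:3000 with the default /api root, the model exposed at /api/acls, an access token for an authorized user in ADMIN_TOKEN, and a hypothetical Note model to attach the rule to:

// List existing ACL entries and create a new one via the exposed REST API.
(async () => {
  const base = 'http://localhost:3000/api';
  const token = process.env.ADMIN_TOKEN; // access token of a user allowed to manage ACLs

  // GET /api/acls -- list current ACL entries
  const list = await fetch(`${base}/acls?access_token=${token}`);
  console.log(await list.json());

  // POST /api/acls -- deny the "guest" role all access to the (hypothetical) Note model
  const created = await fetch(`${base}/acls?access_token=${token}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'Note',
      property: '*',
      accessType: '*',
      principalType: 'ROLE',
      principalId: 'guest',
      permission: 'DENY'
    })
  });
  console.log(await created.json());
})();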

Related

Create datasource for Google BigQuery Plugin using the Grafana API

Issue:
I would like to steer clear of using the traditional configuration:
authenticationType: jwt
clientEmail: <Service Account Email>
defaultProject: <Default Project Name>
tokenUri: https://oauth2.googleapis.com/token
And instead use a service account JSON file from GCP. Is there any way of doing this?
Environment:
OpenShift running in GCP. ServiceAccount key is mounted.
So if I understand your comments correctly, you want to create a BigQuery data source using the Grafana API.
This is the JSON body to send with your request:
{
  "orgId": YOUR_ORG_ID,
  "name": NAME_YOU_WANT_TO_GIVE,
  "type": "doitintl-bigquery-datasource",
  "access": "proxy",
  "isDefault": true,
  "version": 1,
  "readOnly": false,
  "jsonData": {
    "authenticationType": "jwt",
    "clientEmail": EMAIL_OF_YOUR_SERVICE_ACCOUNT,
    "defaultProject": YOUR_PROJECT_ID,
    "tokenUri": "https://oauth2.googleapis.com/token"
  },
  "secureJsonData": {
    "privateKey": YOUR_SERVICE_ACCOUNT_JSON_KEY_FILE
  }
}
So there is no way to avoid the configuration you wanted to "steer clear of"; however, there is no need to take the JSON key file apart: just provide its contents to privateKey. You only have to additionally provide the service account email to clientEmail and the project ID to defaultProject. Otherwise it is no different from using the UI.
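As a rough illustration of the API call itself (not from the answer above): assuming Node 18+, a Grafana admin API key in GRAFANA_API_KEY, the Grafana base URL in GRAFANA_URL, and the service account key mounted at /var/secrets/google/key.json (adjust to wherever OpenShift mounts it), creating the data source could look like this:

const fs = require('fs');

// Read the mounted service account key once: keep the raw contents for privateKey
// (as suggested above) and parse it only to pull out client_email and project_id.
const keyPath = '/var/secrets/google/key.json'; // placeholder path
const rawKey = fs.readFileSync(keyPath, 'utf8');
const key = JSON.parse(rawKey);

const body = {
  orgId: 1, // placeholder org id
  name: 'BigQuery',
  type: 'doitintl-bigquery-datasource',
  access: 'proxy',
  isDefault: true,
  version: 1,
  readOnly: false,
  jsonData: {
    authenticationType: 'jwt',
    clientEmail: key.client_email,
    defaultProject: key.project_id,
    tokenUri: 'https://oauth2.googleapis.com/token'
  },
  secureJsonData: {
    privateKey: rawKey // the key file contents, no need to take it apart
  }
};

(async () => {
  // POST /api/datasources is the Grafana HTTP API endpoint for creating data sources.
  const res = await fetch(`${process.env.GRAFANA_URL}/api/datasources`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GRAFANA_API_KEY}`
    },
    body: JSON.stringify(body)
  });
  console.log(res.status, await res.json());
})();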

How to update a single user's groups with the WSO2 SCIM REST API without using PATCH /Groups, as it results in a timeout when the user count is high?

We are using the WSO2 SCIM APIs to assign roles to users and to update those assignments.
For a role update, we currently add the new user role (add the user to the new role group using the SCIM API) and then delete the existing user role (issue a SCIM GET request for the users under a GROUP, remove the existing user from the list, and use the resulting list as the body of a SCIM PATCH request for the GROUP). With this approach we were able to update roles, but as the user base grew, the PATCH operation started timing out. (The new role gets added to the user, but the existing role persists because the second API call fails.)
Below is one solution I tried out:
Add the new role, delete the newly created role inside the user details, and call the PATCH API with the user's updated roles. On further investigation I realized that the roles inside the user are read-only and can't be updated using PATCH/PUT operations, so I could not find a proper solution.
Is there a way to update a single user's role inside the GROUP without using the PATCH /Groups endpoint?
As I have mentioned in the answer https://stackoverflow.com/a/64225419/10055162, the SCIM specification doesn't allow updating the user's groups attribute using PATCH /Users/{userId}.
Also, PATCH /Groups/{groupId} may cause performance issues when the group's member count is too high.
WSO2 IS has improved the performance of PATCH /Groups/{groupId} to some extent.
https://github.com/wso2/product-is/issues/6918 - available 5.10.0 onwards
https://github.com/wso2/product-is/issues/9120 - available 5.11.0 onwards
So, if you are using an older version of IS, please try the latest GA release (5.11.0); it may improve the performance.
UPDATED:
You can use the SCIM POST /Bulk endpoint to update a user's groups with a single REST call, instead of making multiple PATCH /Groups/{group-id} calls.
Refer to https://anuradha-15.medium.com/scim-2-0-bulk-operation-support-in-wso2-identity-server-5-10-0-8041577a4fe3 for more details on the Bulk endpoint.
Example: to assign two groups (Group1 and Group2) to a user, execute POST https://<host>:<port>/scim2/Bulk with a payload similar to the following.
{
  "Operations": [
    {
      "data": {
        "Operations": [
          {
            "op": "add",
            "value": {
              "members": [
                {
                  "display": "anuradha",
                  "value": "db15b161-a205-454d-9da1-4a2a0df0585e"
                }
              ]
            }
          }
        ]
      },
      "method": "PATCH",
      "path": "/Groups/f707b6cc-91f8-4b8a-97fb-a01c2a79515c"
    },
    {
      "data": {
        "Operations": [
          {
            "op": "add",
            "value": {
              "members": [
                {
                  "display": "anuradha",
                  "value": "db15b161-a205-454d-9da1-4a2a0df0585e"
                }
              ]
            }
          }
        ]
      },
      "method": "PATCH",
      "path": "/Groups/8c91215f-1b7a-4cdb-87d9-ae29c60d70de"
    }
  ],
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:BulkRequest"
  ]
}
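For completeness, a rough sketch of issuing that call from code (an illustration, not part of the original answer): assuming Node 18+, IS on localhost:9443 with the default admin:admin credentials, and the group/user IDs from the example above. A default install uses a self-signed certificate, so for local testing you may need to trust the IS certificate or relax TLS verification.

// Bulk request that adds one user to one group; add more entries to the outer
// Operations array to update several groups in the same call.
const bulkPayload = {
  schemas: ['urn:ietf:params:scim:api:messages:2.0:BulkRequest'],
  Operations: [
    {
      method: 'PATCH',
      path: '/Groups/f707b6cc-91f8-4b8a-97fb-a01c2a79515c',
      data: {
        Operations: [
          {
            op: 'add',
            value: {
              members: [
                { display: 'anuradha', value: 'db15b161-a205-454d-9da1-4a2a0df0585e' }
              ]
            }
          }
        ]
      }
    }
  ]
};

(async () => {
  const res = await fetch('https://localhost:9443/scim2/Bulk', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/scim+json',
      Authorization: 'Basic ' + Buffer.from('admin:admin').toString('base64')
    },
    body: JSON.stringify(bulkPayload)
  });
  console.log(res.status, await res.json());
})();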

Access Django admin from Firebase

I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/; however, I am unable to go any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it has nothing to do with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page was just refreshing when I tried to log in through the Firebase rewrite rule; however, I thought of an alternative way to access the admin panel using my custom domain.
I added a custom domain to the Cloud Run instance so that it uses a subdomain of my site domain, and I can now access the admin panel at admin.customUrl.com rather than customUrl.com/admin/.

Store LoopBack User in MySQL

I want to reuse LoopBack's User login and token logic for my app, and store the info in MySQL.
When I leave the User's datasource as the default (in-memory db), it works fine and the Explorer is there.
Now I just want to change the datasource for User, and I edit model-config.json to use my db connector:
...
"User": {
  "dataSource": "db"
},
"AccessToken": {
  "dataSource": "db",
  "public": false
},
"ACL": {
  "dataSource": "db",
  "public": false
...
After I restart the server and play around a bit, it complains that some tables are not in the db:
{ Error: ER_NO_SUCH_TABLE: Table 'mydb.ACL' doesn't exist
Obviously there is no table structure in MySQL to store users, ACLs and the other built-in models.
How do I get this schema structure into my db?
Is there a script or command?
I found it myself; it's pretty easy:
https://docs.strongloop.com/display/public/LB/Creating+database+tables+for+built-in+models
Leaving the question up in case somebody else needs it.
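In case that page moves, the gist (for LoopBack 2.x/3.x) is a small one-off script that runs automigrate for the built-in models against your datasource. A minimal sketch, assuming the MySQL datasource is named db in datasources.json; note that automigrate drops and recreates existing tables, so use autoupdate instead if you need to keep data:

// server/create-lb-tables.js -- run once with: node server/create-lb-tables.js
var server = require('./server');
var ds = server.dataSources.db; // the MySQL datasource from datasources.json
var lbTables = ['User', 'AccessToken', 'ACL', 'RoleMapping', 'Role'];

ds.automigrate(lbTables, function(err) {
  if (err) throw err;
  console.log('LoopBack tables [' + lbTables + '] created in ' + ds.adapter.name);
  ds.disconnect();
});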

Uploading files to Amazon Web Services (S3)

I am trying to upload files using Amazon Web Services, but I am getting the error shown below, because of which the files are not being uploaded to the server:
{
  "data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"]</Message><RequestId>7A20103396D365B2</RequestId><HostId>xxxxxxxxxxxxxxxxxxxxx</HostId></Error>",
  "status": 403,
  "config": {
    "method": "POST",
    "transformRequest": [null],
    "transformResponse": [null],
    "url": "https://giblib-verification.s3.amazonaws.com/",
    "data": {
      "key": "#usc.edu/1466552912155.jpg",
      "AWSAccessKeyId": "xxxxxxxxxxxxxxxxx",
      "acl": "private",
      "policy": "xxxxxxxxxxxxxxxxxxxxxxx",
      "signature": "xxxxxxxxxxxxxxxxxxxx",
      "Content-Type": "image/jpeg",
      "file": "file:///storage/emulated/0/Android/data/com.ionicframework.giblibion719511/cache/1466552912155.jpg"
    },
    "_isDigested": true,
    "_chunkSize": null,
    "headers": {
      "Accept": "application/json, text/plain, */*"
    },
    "_deferred": {
      "promise": {}
    }
  },
  "statusText": "Forbidden"
}
Can anyone tell me the reason for the 403 Forbidden response? Thanks in advance.
You need to provide more details. Which client are you using? From the looks of it, there is a policy that explicitly denies this upload.
It looks like your user does not have the proper permissions for that specific S3 bucket. Use the AWS console or IAM to assign the proper permissions, including write.
More importantly, immediately invalidate the key/secret pair and rename the bucket. Never share actual keys or passwords on open sites. Someone is likely already using your credentials as you read this.
Read the error: Invalid according to Policy: Policy Condition failed: ["starts-with", "$filename", ""].
Your policy document imposes a restriction on the upload that you are not meeting, and S3 is essentially denying the upload because you told it to.
There is no reason to include this condition in your signed policy document. According to the documentation, it means the POST must include a form field called filename (with any value), but your request sends no such field. Remove this condition from your policy and the upload should work.
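For illustration only (the bucket name comes from the question; the expiration, Content-Type prefix, and size limit are placeholders): a policy document without the failing condition could look roughly like this, built and signed on your server.

// Every field the form actually posts (key, acl, Content-Type) has a matching
// condition, and there is no condition on a "filename" field that is never sent.
const policyDocument = {
  expiration: '2016-12-31T12:00:00.000Z', // placeholder expiry
  conditions: [
    { bucket: 'giblib-verification' },
    ['starts-with', '$key', ''],
    { acl: 'private' },
    ['starts-with', '$Content-Type', 'image/'],
    ['content-length-range', 0, 10485760] // up to ~10 MB, placeholder
  ]
};
// Base64-encode this document and sign it with your AWS secret key server-side,
// then use the results as the "policy" and "signature" form fields.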