upload file greater than 10MB - loopbackjs

I am trying to upload a file larger than 10 MB in a web project, and I get this error:
maxFileSize exceeded, 10551296 bytes of zone data received (max is 10485760)
How can I upload a file greater than 10 MB?

The maxFileSize parameter is configured in the datasources.json file. It is specified in bytes, so raise it above the 10 MB limit reported in the error; for example, "52428800" allows files up to 50 MB:
{
  "storage": {
    "name": "storage",
    "connector": "loopback-component-storage",
    "allowedContentTypes": ["image/jpg", "image/jpeg", "image/png"],
    "provider": "filesystem",
    "maxFileSize": "52428800",
    "root": "image"
  }
}

Related

Looking for REST API(s) in Google Cloud to pull health, CPU load, etc. of servers deployed in Kubernetes

I have a Spring Boot application which currently shows the health of all our servers in React charts. We have some applications (servers) deployed to GCP using Kubernetes. I would like to pull and show the health of the servers, the number of pods, CPU utilization, etc. in my Spring Boot application. I have searched the GKE-related REST APIs in the documentation and found REST URLs at https://container.googleapis.com, but none of them seem to help. Please help me find the set of REST APIs to fetch the health statistics described above.
You can follow the documentation; there you will find all the info you need, like CPU utilization and other useful metrics.
The "metric type" strings in this table must be prefixed with compute.googleapis.com/
Metric type: instance/cpu/utilization:
Fractional utilization of allocated CPU on this instance. Values are typically numbers between 0.0 and 1.0 (but some machine types allow bursting above 1.0). Charts display the values as a percentage between 0% and 100% (or more). This metric is reported by the hypervisor for the VM and can differ from agent.googleapis.com/cpu/utilization, which is reported from inside the VM. Sampled every 60 seconds. After sampling, data is not visible for up to 240 seconds.
instance_name: The name of the VM instance
Creating the GET request
@Raj: This is not the URL for the GET request; check this tutorial. You want to format your GET request the following way (change the parameters to your own values):
curl -X GET -H "Authorization: Bearer $TOKEN" \
  "https://monitoring.googleapis.com/v3/projects/{{YOUR_PROJECT}}/timeSeries/?filter=metric.type+%3D+%22compute.googleapis.com%2Finstance%2Fcpu%2Futilization%22&interval.endTime=2017-01-30T21%3A45%3A00.000000Z&interval.startTime=2017-01-30T21%3A43%3A00.000000Z"
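The same request URL can be assembled programmatically instead of hand-encoding it; a minimal sketch (the project ID and time window are placeholders):

```python
from urllib.parse import urlencode

def timeseries_url(project, metric, start, end):
    """Build a Monitoring v3 timeSeries.list URL with an encoded filter."""
    base = "https://monitoring.googleapis.com/v3/projects/%s/timeSeries/" % project
    params = {
        # urlencode percent-escapes the quotes, slashes and colons for us.
        "filter": 'metric.type = "%s"' % metric,
        "interval.startTime": start,
        "interval.endTime": end,
    }
    return base + "?" + urlencode(params)

url = timeseries_url(
    "my-project",  # placeholder project ID
    "compute.googleapis.com/instance/cpu/utilization",
    "2017-01-30T21:43:00.000000Z",
    "2017-01-30T21:45:00.000000Z",
)
print(url)
```

The resulting string matches the encoding in the curl command above.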
{
  "timeSeries": [
    {
      "metric": {
        "labels": {
          "instance_name": "evan-test"
        },
        "type": "compute.googleapis.com/instance/cpu/utilization"
      },
      "resource": {
        "type": "gce_instance",
        "labels": {
          "instance_id": "743374153023006726",
          "zone": "us-east1-d",
          "project_id": "evan-testing"
        }
      },
      "metricKind": "GAUGE",
      "valueType": "DOUBLE",
      "points": [
        {
          "interval": {
            "startTime": "2017-01-30T21:44:01.763Z",
            "endTime": "2017-01-30T21:44:01.763Z"
          },
          "value": {
            "doubleValue": 0.00097060417263416339
          }
        },
        {
          "interval": {
            "startTime": "2017-01-30T21:43:01.763Z",
            "endTime": "2017-01-30T21:43:01.763Z"
          },
          "value": {
            "doubleValue": 0.00085122420706227329
          }
        }
      ]
    },
    ...
  ]
}
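To feed those numbers into the React charts, the timeSeries entries can be flattened into per-instance samples; a small sketch assuming the response shape shown above (the variable names are illustrative):

```python
import json

# A trimmed copy of the response shape shown above.
response = json.loads("""
{"timeSeries": [
  {"metric": {"labels": {"instance_name": "evan-test"},
              "type": "compute.googleapis.com/instance/cpu/utilization"},
   "points": [
     {"interval": {"endTime": "2017-01-30T21:44:01.763Z"},
      "value": {"doubleValue": 0.00097060417263416339}},
     {"interval": {"endTime": "2017-01-30T21:43:01.763Z"},
      "value": {"doubleValue": 0.00085122420706227329}}
   ]}
]}
""")

# Collect (instance, timestamp, utilization-as-percent) tuples.
samples = []
for series in response["timeSeries"]:
    name = series["metric"]["labels"]["instance_name"]
    for point in series["points"]:
        samples.append((
            name,
            point["interval"]["endTime"],
            point["value"]["doubleValue"] * 100,  # fraction -> percent
        ))

print(samples)
```

The percentage conversion mirrors how the metric documentation says charts display the fractional values.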

HIVE_INVALID_METADATA in Amazon Athena

How can I work around the following error in Amazon Athena?
HIVE_INVALID_METADATA: com.facebook.presto.hive.DataCatalogException: Error: : expected at the position 8 of 'struct<x-amz-request-id:string,action:string,label:string,category:string,when:string>' but '-' is found. (Service: null; Status Code: 0; Error Code: null; Request ID: null)
When looking at position 8 in the database table connected to Athena generated by AWS Glue, I can see that it has a column named attributes with a corresponding struct data type:
struct <
x-amz-request-id:string,
action:string,
label:string,
category:string,
when:string
>
My guess is that the error occurs because the attributes field is not always populated (cf. the _session.start event below) and does not always contain all fields (e.g. the DocumentHandling event below does not contain the attributes.x-amz-request-id field). What is the appropriate way to address this problem? Can I make a column optional in Glue? Can (should?) Glue fill the struct with empty strings? Other options?
Background: I have the following backend structure:
Amazon PinPoint Analytics collects metrics from my application.
The PinPoint event stream has been configured to forward the events to an Amazon Kinesis Firehose delivery stream.
Kinesis Firehose writes data to S3
Use AWS Glue to crawl S3
Use Athena to write queries based on the databases and tables generated by AWS Glue
I can see PinPoint events successfully being added to json files in S3, e.g.
First event in a file:
{
"event_type": "_session.start",
"event_timestamp": 1524835188519,
"arrival_timestamp": 1524835192884,
"event_version": "3.1",
"application": {
"app_id": "[an app id]",
"cognito_identity_pool_id": "[a pool id]",
"sdk": {
"name": "Mozilla",
"version": "5.0"
}
},
"client": {
"client_id": "[a client id]",
"cognito_id": "[a cognito id]"
},
"device": {
"locale": {
"code": "en_GB",
"country": "GB",
"language": "en"
},
"make": "generic web browser",
"model": "Unknown",
"platform": {
"name": "macos",
"version": "10.12.6"
}
},
"session": {
"session_id": "[a session id]",
"start_timestamp": 1524835188519
},
"attributes": {},
"client_context": {
"custom": {
"legacy_identifier": "50ebf77917c74f9590c0c0abbe5522d2"
}
},
"awsAccountId": "672057540201"
}
Second event in the same file:
{
"event_type": "DocumentHandling",
"event_timestamp": 1524835194932,
"arrival_timestamp": 1524835200692,
"event_version": "3.1",
"application": {
"app_id": "[an app id]",
"cognito_identity_pool_id": "[a pool id]",
"sdk": {
"name": "Mozilla",
"version": "5.0"
}
},
"client": {
"client_id": "[a client id]",
"cognito_id": "[a cognito id]"
},
"device": {
"locale": {
"code": "en_GB",
"country": "GB",
"language": "en"
},
"make": "generic web browser",
"model": "Unknown",
"platform": {
"name": "macos",
"version": "10.12.6"
}
},
"session": {},
"attributes": {
"action": "Button-click",
"label": "FavoriteStar",
"category": "Navigation"
},
"metrics": {
"details": 40.0
},
"client_context": {
"custom": {
"legacy_identifier": "50ebf77917c74f9590c0c0abbe5522d2"
}
},
"awsAccountId": "[aws account id]"
}
Next, AWS Glue has generated a database and a table. Specifically, I see that there is a column named attributes that has the value of
struct <
x-amz-request-id:string,
action:string,
label:string,
category:string,
when:string
>
However, when I attempt to Preview table from Athena, i.e. execute the query
SELECT * FROM "pinpoint-test"."pinpoint_testfirehose" limit 10;
I get the error message described earlier.
Side note, I have tried to remove the attributes field (by editing the database table from Glue), but that results in Internal error when executing the SQL query from Athena.
This is a known limitation: Athena table and database names cannot contain special characters other than underscore (_).
Source: http://docs.aws.amazon.com/athena/latest/ug/known-limitations.html
Use backticks (`) when the table name contains a hyphen (-).
Example:
SELECT * FROM `pinpoint-test`.`pinpoint_testfirehose` limit 10;
Make sure you select the "default" database in the left pane.
I believe the problem is your struct element name x-amz-request-id, specifically the "-" in the name.
I'm currently dealing with a similar issue, since the elements in my struct have "::" in their names.
Sample data:
some_key: {
"system::date": date,
"system::nps_rating": 0
}
Glue derived the following struct schema (it tried to escape the colons with \):
struct <
system\:\:date:String
system\:\:nps_rating:Int
>
But that still gives me an error in Athena.
I don't have a good solution for this other than changing the struct to STRING and processing the data that way.
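If the attributes column is redefined as a plain string in the Glue table, the individual fields can still be pulled out in Athena with Presto's JSON functions; a sketch against the table from the question (keys with hyphens need the bracketed JSONPath form):

```sql
SELECT
  json_extract_scalar(attributes, '$["x-amz-request-id"]') AS request_id,
  json_extract_scalar(attributes, '$.action')              AS action,
  json_extract_scalar(attributes, '$.label')               AS label,
  json_extract_scalar(attributes, '$.category')            AS category
FROM `pinpoint-test`.`pinpoint_testfirehose`
LIMIT 10;
```

Missing keys simply come back as NULL, which sidesteps the "not always populated" problem from the question.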

Use the name of the table from Amazon RDS in the output csv being sent to S3

I successfully managed to get a data pipeline to transfer data from a set of tables in Amazon RDS (Aurora) to a set of .csv files in S3 with a "copyActivity" connecting the two DataNodes.
However, I'd like the .csv file to have the name of the table (or view) that it came from. I can't quite figure out how to do this. I think the best approach is to use an expression in the filePath parameter of the S3 DataNode.
But, I've tried #{table}, #{node.table}, #{parent.table}, and a variety of combinations of node.id and parent.name without success.
Here's a couple of JSON snippets from my pipeline:
"database": {
"ref": "DatabaseId_abc123"
},
"name": "Foo",
"id": "DataNodeId_xyz321",
"type": "MySqlDataNode",
"table": "table_foo",
"selectQuery": "select * from #{table}"
},
{
"schedule": {
"ref": "DefaultSchedule"
},
"filePath": "#{myOutputS3Loc}/#{parent.node.table.help.me.here}.csv",
"name": "S3_BAR_Bucket",
"id": "DataNodeId_w7x8y9",
"type": "S3DataNode"
}
Any advice you can provide would be appreciated.
I see that you have #{table} (did you mean #{myTable}?). If you are using a parameter to pass the name of the DB table, you can use that in the S3 filepath as well like this:
"filePath": "#{myOutputS3Loc}/#{myTable}.csv",

Deploy image to AWS Elastic Beanstalk from private Docker repo

I'm trying to pull a Docker image from its private repo and deploy it on AWS Elastic Beanstalk with the help of a Dockerrun.aws.json packed in a zip. Its content is:
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "my-bucket",
"Key": "docker/.dockercfg"
},
"Image": {
"Name": "namespace/repo:tag",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8080"
}
]
}
Where "my-bucket" is my bucket's name on s3, which uses the same location as my BS environment. Configuration that's set in key is the result of
$ docker login
invoked in docker2boot app's terminal. Then it's copied to folder "docker" in "my-bucket". The image exists for sure.
After that I upload .zip with dockerrun file to EB and on deploy I get
Activity execution failed, because: WARNING: Invalid auth configuration file
What am I missing?
Thanks in advance
Docker has updated the configuration file path from ~/.dockercfg to ~/.docker/config.json. They also have leveraged this opportunity to do a breaking change to the configuration file format.
AWS however still expects the former format, the one used in ~/.dockercfg (see the file name in their documentation):
{
"https://index.docker.io/v1/": {
"auth": "__auth__",
"email": "__email__"
}
}
Which is incompatible with the new format used in ~/.docker/config.json:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "__auth__",
"email": "__email__"
}
}
}
They are pretty similar, though: if your version of Docker generates the new format, just strip the "auths" line and its corresponding closing brace and you are good to go.
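That unwrapping can also be done programmatically rather than by hand-editing; a minimal sketch of the conversion (the sample credentials are the placeholders from above):

```python
import json

def to_dockercfg(new_config):
    """Convert the new ~/.docker/config.json format to the old ~/.dockercfg
    format by unwrapping the top-level "auths" object."""
    return new_config.get("auths", new_config)

# The new ~/.docker/config.json shape shown above.
new_format = {
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "__auth__",
            "email": "__email__",
        }
    }
}

old_format = to_dockercfg(new_format)
print(json.dumps(old_format, indent=2))
```

The result is the registry-keyed object AWS expects in the .dockercfg file referenced from Dockerrun.aws.json.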

Aws OpsWorks RDS Configuration for Tomcat context.xml

I am trying to deploy an app named abcd with the artifact abcd.war. I want to configure it to use an external datasource. Below is my abcd.war/META-INF/context.xml file:
<Context>
<ResourceLink global="jdbc/abcdDataSource1" name="jdbc/abcdDataSource1" type="javax.sql.DataSource"/>
<ResourceLink global="jdbc/abcdDataSource2" name="jdbc/abcdDataSource2" type="javax.sql.DataSource"/>
</Context>
I configured the below custom JSON during a deployment
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1",
    "fa": "jdbc/abcdDataSource2"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      },
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds2",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
I also added the recipe opsworks_java::context during the configure phase, but it doesn't seem to work, and I always get the messages below:
[2014-01-11T16:12:48+00:00] INFO: Processing template[context file for abcd] action create (opsworks_java::context line 16)
[2014-01-11T16:12:48+00:00] DEBUG: Skipping template[context file for abcd] due to only_if ruby block
Can anyone please help with what I am missing in the OpsWorks configuration?
You can only configure one datasource using the built-in database.yml. If you want to pass additional information to your environment, see Passing Data to Applications.
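Given that limitation, a custom JSON with a single datasource would look like this sketch (the app short name fa and the connection values are taken from the question; only one database block is kept):

```json
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
```

Note that the original custom JSON also repeated the keys "fa" and "database"; duplicate keys in JSON silently overwrite each other, so only the last value would ever be seen anyway.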