Zappa deployment successful without url - django

I am trying to deploy a Django application with Zappa. The deployment works correctly and I also get the message "Your updated Zappa deployment is live", but I can't seem to find the URL to access the live application.

You can find the URL by running zappa status <stage>, where <stage> is probably something like dev. See also https://github.com/Miserlou/Zappa#status for details.
The printout will include your URL as well as other status details of your Lambda function.
Note that this works provided you give the IAM user full access permissions (for testing purposes only).
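For example, a quick check from the command line might look roughly like this (assuming the stage is called dev; the exact labels in the output can vary between Zappa versions):
zappa status dev
# look for a line similar to:
#   API Gateway URL: https://xxxxxxxxxx.execute-api.<region>.amazonaws.com/dev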

I had to configure my zappa_settings.json by adding these lines:
"apigateway_enabled": true,
"manage_roles": true,
"cors": true,
So the final zappa_settings.json looks like this:
{
    "dev": {
        "django_settings": "zappatest.settings",
        "apigateway_enabled": true,
        "manage_roles": true,
        "role_name": "Role_name",
        "role_arn": "arn_name",
        "profile_name": "default",
        "project_name": "project_name",
        "runtime": "python3.8",
        "s3_bucket": "bucket_name",
        "aws_region": "region_name",
        "cors": true
    }
}
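After editing the settings of an already deployed stage, the changes typically need to be pushed again before they take effect, for example with:
zappa update dev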

Related

How to log library logs to AWS CloudWatch using Serilog?

Currently I am working on an ASP.NET 6 Web API, and as a logger we use Serilog to log all the necessary details to CloudWatch; this works fine. Now I need to send library logs, such as AWS errors, to CloudWatch as well. There is an option for that in the config file, but it only saves the logs to a file, which eventually results in No space left on device : '/app/Logs/serilog-aws-errors.txt', and the details in that file do not appear in the CloudWatch logs.
This is the appsettings data I use:
"Serilog": {
"Using": [ "AWS.Logger.SeriLog", "Serilog.Sinks.Console", "Serilog.Sinks.File" ],
"MinimumLevel": "Debug",
"WriteTo": [
{ "Name": "AWSSeriLog" },
{ "Name": "Console" },
{
"Name": "File",
"Args": {
"path": "Logs/webapi-.txt",
"rollingInterval": "Day"
}
}
],
"Region": "eu-west-2",
"LogGroup": "/development/serilog",
"LibraryLogFileName": "Logs/serilog-aws-errors.txt"
}
I need to know whether there is a way to send the details in serilog-aws-errors.txt to AWS CloudWatch or an S3 bucket.
This depends a lot on where in AWS you are trying to deploy your service. In ECS or Fargate you can log directly to the console. This would be a snippet of the container definition:
"containerDefinitions": [
{
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/dev/ecs/my-api-logs-here",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
},
With the configuration above you only need the Serilog.Sinks.Console sink and everything will log without a special AWS sink. To write to the console you can just use:
loggerConfiguration.WriteTo.Async(a =>
{
    a.Console(new JsonFormatter());
});
When deployed to Fargate or ECS, these console logs will appear in your CloudWatch logs. No additional sink is necessary. Lambda logs have a similar setup. See: https://docs.aws.amazon.com/lambda/latest/dg/csharp-logging.html for more details.
If you want to use Serilog.Sinks.AwsCloudWatch, it does have some nice features, but the setup is a little different. You probably won't want to log to the console or your file sink at all; instead, you'll log directly to CloudWatch. Set it up according to the instructions on GitHub: https://github.com/Cimpress-MCP/serilog-sinks-awscloudwatch. You can get this up and running in your local environment, then set up your app settings so that this only runs when deployed to AWS, while locally you still use the console or file settings.
var options = new CloudWatchSinkOptions
{
    // the name of the CloudWatch Log group for logging
    LogGroupName = logGroupName,
    // the main formatter of the log event
    TextFormatter = formatter,
    // other defaults
    MinimumLogEventLevel = LogEventLevel.Information,
    BatchSizeLimit = 100,
    QueueSizeLimit = 10000,
    Period = TimeSpan.FromSeconds(10),
    CreateLogGroup = true,
    LogStreamNameProvider = new DefaultLogStreamProvider(),
    RetryAttempts = 5
};
// setup AWS CloudWatch client
var client = new AmazonCloudWatchLogsClient(myAwsRegion);
// Attach the sink to the logger configuration
Log.Logger = new LoggerConfiguration()
    .WriteTo.AmazonCloudWatch(options, client)
    .CreateLogger();
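As the answer suggests, you can gate the CloudWatch sink so that it is only attached when the app actually runs in AWS, while local runs keep the console sink. A minimal sketch of that idea, assuming the options and client variables from the snippet above and an environment variable of your own choosing (DOTNET_ENVIRONMENT here is only an example):
var loggerConfiguration = new LoggerConfiguration()
    .MinimumLevel.Information()
    // always log to the console; in ECS/Fargate this already ends up in CloudWatch
    .WriteTo.Console(new JsonFormatter());

var environment = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT");
if (!string.Equals(environment, "Development", StringComparison.OrdinalIgnoreCase))
{
    // only attach the CloudWatch sink outside local development
    loggerConfiguration = loggerConfiguration.WriteTo.AmazonCloudWatch(options, client);
}

Log.Logger = loggerConfiguration.CreateLogger();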

Cannot use AWS SSO credentials with CDK

Since PR https://github.com/aws/aws-cdk/pull/19454 and release v2.18.0, CDK is supposed to support SSO credentials via AWS CLI v2 profiles.
However, no matter what I do, I simply cannot get this to work.
I have created a request for updated documentation in the AWS CDK issues section, since no official documentation explains how this is supposed to work in practice, and the official documentation still says it is not supported and to use yawsso: https://github.com/aws/aws-cdk/issues/21314
From going through four years of old threads up to now, I have attempted the following settings with zero success.
My .aws/config file (sensitive values redacted):
[profile DEV-NN-HSMX]
sso_start_url = https://my-company-url.awsapps.com/start#/
sso_region = eu-central-1
sso_account_name = MY-ACCOUNT
sso_account_id = MY-ACCOUNT-ID
sso_role_name = AdministratorAccess
region = eu-central-1
Running aws sso login --profile "DEV-NN-HSMX" redirects me as expected and I can authenticate with my SSO provider.
Running aws sts get-caller-identity --profile "DEV-NN-HSMX" works as expected and confirms my SSO identity.
Running aws s3 ls --profile "DEV-NN-HSMX" works as expected and shows that the credentials have access.
When attempting to run any CDK commands, however, I simply cannot make it work.
AWS CLI version: 2.7.16
AWS CDK version: 2.33.0
I have attempted all of the following, separately, mixed in various combinations, and all at once.
cdk deploy --profile "DEV-NN-HSMX"
Exporting the $AWS_PROFILE and/or $CDK_DEFAULT_PROFILE environment variables:
cdk doctor
ℹ️ CDK Version: 2.33.0 (build 859272d)
ℹ️ AWS environment variables:
- AWS_CA_BUNDLE = /home/vscode/certs/cacert.pem
- AWS_PROFILE = DEV-NN-HSMX
- AWS_REGION = eu-central-1
- AWS_STS_REGIONAL_ENDPOINTS = regional
- AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ CDK environment variables:
- CDK_DEFAULT_PROFILE = DEV-NN-HSMX
- CDK_DEFAULT_REGION = eu-central-1
I have tried with a deleted .aws/credentials file as well as one that is just empty.
I have deleted everything in .aws/sso/cache and in .aws/cli/cache to make sure no expired credential information remained, and then re-authenticated with aws sso login --profile "DEV-NN-HSMX".
If I use yawsso --profiles DEV-NN-HSMX and get temporary credentials into .aws/credentials for my profile, it works fine.
I have been able to bootstrap and deploy without issues using the credential conversion, proving that from a connection, access-rights and bootstrap standpoint everything works as expected.
When using any of the SSO methods as explained above without exporting credentials, I always get the following error message.
cdk deploy --profile "DEV-NN-HSMX"
✨ Synthesis time: 4.18s
Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
Running the command with full verbosity gives this output:
cdk deploy --trace --verbose --profile "DEV-NN-HSMX"
CDK toolkit version: 2.33.0 (build 859272d)
Command line arguments: {
_: [ 'deploy' ],
trace: true,
verbose: 1,
v: 1,
profile: 'DEV-NN-HSMX',
defaultProfile: 'DEV-NN-HSMX',
defaultRegion: 'eu-central-1',
lookups: true,
'ignore-errors': false,
ignoreErrors: false,
json: false,
j: false,
debug: false,
ec2creds: undefined,
i: undefined,
'version-reporting': undefined,
versionReporting: undefined,
'path-metadata': true,
pathMetadata: true,
'asset-metadata': true,
assetMetadata: true,
'role-arn': undefined,
r: undefined,
roleArn: undefined,
staging: true,
'no-color': false,
noColor: false,
ci: false,
all: false,
'build-exclude': [],
E: [],
buildExclude: [],
execute: true,
force: false,
f: false,
parameters: [ {} ],
'previous-parameters': true,
previousParameters: true,
logs: true,
'$0': '/home/vscode/.local/state/fnm_multishells/216_1658735050827/bin/cdk'
}
cdk.json: {
"app": "npx ts-node --prefer-ts-exts bin/cdk-demo.ts",
"watch": {
"include": [
"**"
],
"exclude": [
"README.md",
"cdk*.json",
"**/*.d.ts",
"**/*.js",
"tsconfig.json",
"package*.json",
"yarn.lock",
"node_modules",
"test"
]
},
"context": {
"#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
"#aws-cdk/core:stackRelativeExports": true,
"#aws-cdk/aws-rds:lowercaseDbIdentifier": true,
"#aws-cdk/aws-lambda:recognizeVersionProps": true,
"#aws-cdk/aws-lambda:recognizeLayerVersion": true,
"#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true,
"#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
"#aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
"#aws-cdk/core:checkSecretUsage": true,
"#aws-cdk/aws-iam:minimizePolicies": true,
"#aws-cdk/core:validateSnapshotRemovalPolicy": true,
"#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
"#aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
"#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
"#aws-cdk/core:target-partitions": [
"aws",
"aws-cn"
]
}
}
merged settings: {
versionReporting: true,
pathMetadata: true,
output: 'cdk.out',
app: 'npx ts-node --prefer-ts-exts bin/cdk-demo.ts',
watch: {
include: [ '**' ],
exclude: [
'README.md',
'cdk*.json',
'**/*.d.ts',
'**/*.js',
'tsconfig.json',
'package*.json',
'yarn.lock',
'node_modules',
'test'
]
},
context: {
'#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId': true,
'#aws-cdk/core:stackRelativeExports': true,
'#aws-cdk/aws-rds:lowercaseDbIdentifier': true,
'#aws-cdk/aws-lambda:recognizeVersionProps': true,
'#aws-cdk/aws-lambda:recognizeLayerVersion': true,
'#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021': true,
'#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver': true,
'#aws-cdk/aws-ec2:uniqueImdsv2TemplateName': true,
'#aws-cdk/core:checkSecretUsage': true,
'#aws-cdk/aws-iam:minimizePolicies': true,
'#aws-cdk/core:validateSnapshotRemovalPolicy': true,
'#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName': true,
'#aws-cdk/aws-s3:createDefaultLoggingPolicy': true,
'#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption': true,
'#aws-cdk/core:target-partitions': [ 'aws', 'aws-cn' ]
},
debug: false,
assetMetadata: true,
profile: 'DEV-NN-HSMX',
toolkitBucket: {},
staging: true,
bundlingStacks: [ '*' ],
lookups: true
}
Using CA bundle path: /home/vscode/certs/cacert.pem
Toolkit stack: CDKToolkit
Setting "CDK_DEFAULT_REGION" environment variable to eu-central-1
Resolving default credentials
Could not refresh notices: Error: unable to get local issuer certificate
Unable to determine the default AWS account: ProcessCredentialsProviderFailure: Profile DEV-NN-HSMX did not include credential process
at ProcessCredentials2.load (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/process_credentials.js:102:11)
at ProcessCredentials2.coalesceRefresh (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:205:12)
at ProcessCredentials2.refresh (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/process_credentials.js:163:10)
at ProcessCredentials2.get2 [as get] (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:122:12)
at resolveNext2 (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/credential_provider_chain.js:125:17)
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials/credential_provider_chain.js:126:13
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:124:23
at /home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-sdk/lib/credentials.js:212:15
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 'ProcessCredentialsProviderFailure',
time: 2022-07-25T15:01:41.645Z
}
context: {
'#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId': true,
'#aws-cdk/core:stackRelativeExports': true,
'#aws-cdk/aws-rds:lowercaseDbIdentifier': true,
'#aws-cdk/aws-lambda:recognizeVersionProps': true,
'#aws-cdk/aws-lambda:recognizeLayerVersion': true,
'#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021': true,
'#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver': true,
'#aws-cdk/aws-ec2:uniqueImdsv2TemplateName': true,
'#aws-cdk/core:checkSecretUsage': true,
'#aws-cdk/aws-iam:minimizePolicies': true,
'#aws-cdk/core:validateSnapshotRemovalPolicy': true,
'#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName': true,
'#aws-cdk/aws-s3:createDefaultLoggingPolicy': true,
'#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption': true,
'#aws-cdk/core:target-partitions': [ 'aws', 'aws-cn' ],
'aws:cdk:enable-path-metadata': true,
'aws:cdk:enable-asset-metadata': true,
'aws:cdk:version-reporting': true,
'aws:cdk:bundling-stacks': [ '*' ]
}
outdir: cdk.out
env: {
CDK_DEFAULT_REGION: 'eu-central-1',
CDK_CONTEXT_JSON: '{"#aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId":true,"#aws-cdk/core:stackRelativeExports":true,"#aws-cdk/aws-rds:lowercaseDbIdentifier":true,"#aws-cdk/aws-lambda:recognizeVersionProps":true,"#aws-cdk/aws-lambda:recognizeLayerVersion":true,"#aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021":true,"#aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver":true,"#aws-cdk/aws-ec2:uniqueImdsv2TemplateName":true,"#aws-cdk/core:checkSecretUsage":true,"#aws-cdk/aws-iam:minimizePolicies":true,"#aws-cdk/core:validateSnapshotRemovalPolicy":true,"#aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName":true,"#aws-cdk/aws-s3:createDefaultLoggingPolicy":true,"#aws-cdk/aws-sns-subscriptions:restrictSqsDescryption":true,"#aws-cdk/core:target-partitions":["aws","aws-cn"],"aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '20.0.0',
CDK_CLI_VERSION: '2.33.0'
}
✨ Synthesis time: 4.54s
Reading existing template for stack CdkDemoStack.
Reading cached notices from /home/vscode/.cdk/cache/notices.json
Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
Error: Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
at SdkProvider.resolveEnvironment (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-cdk/lib/api/aws-auth/sdk-provider.ts:238:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at CloudFormationDeployments.prepareSdkFor (/home/vscode/.local/share/fnm/node-versions/v16.16.0/installation/lib/node_modules/aws-cdk/lib/api/cloudformation-deployments.ts:432:33)
I do notice the ProcessCredentialsProviderFailure in the output, but this is not very informative on how to solve it.
Anyone have any ideas or input?
It seems that environment-agnostic stacks, where you do not put the environment information directly into the stack code, do not work with the new SSO integration.
Adding the environment information into the stack code makes it work:
const app = new cdk.App();
new CdkDemoStack(app, 'CdkDemoStack', {
    env: {
        account: process.env.CDK_DEFAULT_ACCOUNT,
        region: process.env.CDK_DEFAULT_REGION
    },
});
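If you prefer not to depend on the CDK_DEFAULT_* environment variables at synthesis time, the target environment can also be pinned explicitly (the account ID below is a placeholder):
new CdkDemoStack(app, 'CdkDemoStack', {
    // hypothetical hard-coded target environment; use your real account ID and region
    env: { account: '111111111111', region: 'eu-central-1' },
});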

Create datasource for Google BigQuery Plugin using the Grafana API

Issue:
I would like to steer clear of using the traditional:
authenticationType: jwt
clientEmail: <Service Account Email>
defaultProject: <Default Project Name>
tokenUri: https://oauth2.googleapis.com/token
and use a service account JSON key file from GCP instead. Is there any way of doing this?
Environment:
OpenShift running in GCP. ServiceAccount key is mounted.
So if I understand your comments correctly, you want to create a BigQuery data source using the Grafana API.
This is the JSON body to send with your request:
{
    "orgId": YOUR_ORG_ID,
    "name": NAME_YOU_WANT_TO_GIVE,
    "type": "doitintl-bigquery-datasource",
    "access": "proxy",
    "isDefault": true,
    "version": 1,
    "readOnly": false,
    "jsonData": {
        "authenticationType": "jwt",
        "clientEmail": EMAIL_OF_YOUR_SERVICE_ACCOUNT,
        "defaultProject": YOUR_PROJECT_ID,
        "tokenUri": "https://oauth2.googleapis.com/token"
    },
    "secureJsonData": {
        "privateKey": YOUR_SERVICE_ACCOUNT_JSON_KEY_FILE
    }
}
So there is no way to avoid the configuration you wanted to "steer clear of"; however, there is no need to take the JSON key file apart, just provide it to privateKey. You only have to additionally provide the service account email in clientEmail and the project ID in defaultProject. Otherwise it is no different from using the UI.
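For illustration, creating the data source through Grafana's HTTP API could look roughly like the sketch below. It assumes a Grafana API key with admin rights and simply passes the whole downloaded key file as privateKey, as described above; the URL, file name and org ID are placeholders:
import json
import requests

GRAFANA_URL = "https://my-grafana.example.com"  # placeholder Grafana instance
API_KEY = "YOUR_GRAFANA_API_KEY"                # placeholder admin API key

# read the service account key file downloaded from GCP
with open("service-account.json") as f:
    key_file = f.read()
sa = json.loads(key_file)

payload = {
    "orgId": 1,
    "name": "BigQuery",
    "type": "doitintl-bigquery-datasource",
    "access": "proxy",
    "isDefault": True,
    "version": 1,
    "readOnly": False,
    "jsonData": {
        "authenticationType": "jwt",
        "clientEmail": sa["client_email"],
        "defaultProject": sa["project_id"],
        "tokenUri": "https://oauth2.googleapis.com/token",
    },
    # the whole key file goes into privateKey, no need to take it apart
    "secureJsonData": {"privateKey": key_file},
}

response = requests.post(
    f"{GRAFANA_URL}/api/datasources",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print(response.json())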

Access Django admin from Firebase

I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/, however I am unable to go any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it is not an issue with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page was just refreshing when I tried to log in via the Firebase rewrite rule; however, I thought of an alternative way to access the admin panel using my custom domain.
I added a custom domain to the Cloud Run instance so that it uses a subdomain of my site domain, and I can access the admin panel via admin.customUrl.com rather than customUrl.com/admin/.
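For reference, mapping such a subdomain to the Cloud Run service can be done from the Cloud console or with the gcloud CLI; a rough sketch (service name, domain and region are placeholders, and the command may still require the beta component depending on your gcloud version):
gcloud beta run domain-mappings create \
    --service serviceId \
    --domain admin.customUrl.com \
    --region europe-west1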

Loopback: How to use the ACL REST API? And where is it used?

How do I use this API? I cannot find any documentation.
https://docs.strongloop.com/display/public/LB/ACL+REST+API
I created a user and I created a role.
I have an ACL in my model.json, but the API does not return anything.
I also found this link, but it was not really helpful:
https://docs.strongloop.com/display/public/LB/Using+built-in+models#Usingbuilt-inmodels-Usermodel
This may help:
By default, the ACL REST API is not exposed. To expose it, add the following to models.json:
"acl": {
"public": true,
"options": {
"base": "ACL"
},
"dataSource": "db"
},
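Once exposed, the ACL model should appear under your REST API root like any other public model. A hedged example, assuming the default /api root and the acl model name used above (the exact plural path may differ in your setup):
curl http://localhost:3000/api/acls
Note that ACLs declared statically in a model's JSON file may not show up here, since this endpoint reads ACL records from the configured datasource ("db" above).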