How to set environment variables for complex configuration parameters in AWS lambda using asp.net core 3.1 serverless? - amazon-web-services

In my ASP.NET Core 3.1 web API launchSettings.json I have an environment variable named "AdminstratorConfig:AdminstratorPassword": "myPasswordValue".
Now in my code I also have a class named AppSettings defined like this:
public class AppSettings
{
    public AdminstratorConfiguration AdminstratorConfig { get; set; }
}

public class AdminstratorConfiguration
{
    public string AdminstratorPassword { get; set; }
}
When running locally I can bind the environment variable into my AppSettings instance with something like this in Startup:
public class Startup
{
    public IConfiguration Configuration { get; }

    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        var appSettings = new AppSettings();
        Configuration.Bind(appSettings);
        // Here appSettings.AdminstratorConfig.AdminstratorPassword contains value 'myPasswordValue'
    }
}
I can also load the same from my appsettings.json if I have my configuration defined as:
{
  "AdminstratorConfig":
  {
    "AdminstratorPassword": "myPasswordValue"
  }
}
However, after deploying my application as an AWS serverless Lambda, I tried to set the same environment variable in the Lambda configuration section, but it doesn't allow the special character ':' there.
Is there a way we can set and load these complex environment variables in AWS Lambda similar to my local?
If not, what are the possible alternative approaches?

You can use __ (double underscore) instead of : (colon), so the environment variable in Lambda would be AdminstratorConfig__AdminstratorPassword as the key and myPasswordValue as the value. (The name on each side of the __ must match your configuration key exactly, including your spelling of AdminstratorConfig.)
See the ASP.NET Core configuration documentation on environment variables.
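Under the hood, the environment-variables configuration provider simply maps __ to the : hierarchy separator before the binder sees the keys. A minimal sketch of that normalization (an illustration, not the actual provider code):

```java
public class EnvKeyNormalization {
    // Sketch: a "Section__Key" environment variable name maps to the
    // "Section:Key" configuration key used by Configuration.Bind.
    static String normalize(String envVarName) {
        return envVarName.replace("__", ":");
    }
}
```

So AdminstratorConfig__AdminstratorPassword ends up bound under AdminstratorConfig:AdminstratorPassword, the same key your launchSettings.json uses.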

Related

Upload file to s3 using api gateway and read query params inside lambda

I've found this example for uploading an image to an S3 bucket using API Gateway.
In the end the file is stored in the S3 bucket by hitting the following endpoint:
https://abc.execute-api.ap-southeast-1.amazonaws.com/v1/mybucket/myobject.jpeg
I have lambda function which accepts several parameters
public async Task<string> FunctionHandler(MyRequest request, ILambdaContext context)
{
    ...
}
public class MyRequest
{
    public double Price { get; set; }
    public string Name { get; set; }
    public Guid Id { get; set; }
}
My question is:
Is it possible to extend the file upload via API Gateway to accept query params (MyRequest in this case) and pass those params to the Lambda function? The idea is to trigger the Lambda function once the file is uploaded and read the passed params inside it.
Another idea is to store all the params as part of the filename, for example:
https://abc.execute-api.ap-southeast-1.amazonaws.com/v1/mybucket/price_20-name_boston-id_123-myobject.jpeg
and then parse that filename inside the Lambda.
Or is there another option you would suggest?
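If you go the filename route, parsing the key/value pairs back out in the Lambda handler is straightforward. A sketch, assuming the key_value segments joined by - shown in the example URL, with the trailing segment being the original object name:

```java
import java.util.HashMap;
import java.util.Map;

public class FilenameParams {
    // Split "price_20-name_boston-id_123-myobject.jpeg" into key/value pairs;
    // segments without an underscore (the original filename) are skipped.
    static Map<String, String> parse(String objectKey) {
        Map<String, String> params = new HashMap<>();
        for (String segment : objectKey.split("-")) {
            int sep = segment.indexOf('_');
            if (sep > 0) {
                params.put(segment.substring(0, sep), segment.substring(sep + 1));
            }
        }
        return params;
    }
}
```

Note this only works if neither keys nor values contain - or _, so URL-encoding or a less ambiguous delimiter may be safer in practice.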

How to deploy AWS CDK app multiple times?

Is it possible to deploy a CDK app to the same account multiple times? I want to run synth once and then run cdk deploy against that synthesised template multiple times.
I can see that the recent 1.28.0 release of the CDK allows for passing CloudFormation parameters into the deploy command (via #1237). This means I can parameterize the contents of a stack, but I don't know how to change the name/id of the app itself.
For example, here is a simple app:
public class ExampleApp {
    public static void main(final String[] args) {
        App app = new App();
        new ExampleStack(app, "ExampleStack");
        app.synth();
    }
}
and here is a simple do-nothing stack:
public class ExampleStack extends Stack {
    public ExampleStack(final Construct scope, final String id) {
        this(scope, id, null);
    }

    public ExampleStack(final Construct scope, final String id, final StackProps props) {
        super(scope, id, props);
        CfnParameter someVar = CfnParameter.Builder.create(this, "SomeVar")
                .description("Some variable that can be passed in at deploy-time.")
                .type("String")
                .build();
        // rest of stack here
    }
}
I can run cdk synth and output the template somewhere, then run
cdk --app path/to/cdk.out deploy ExampleStack --parameters "ExampleStack:SomeVar=SomeValue"
and the parameter will be passed into the stack at deploy-time.
However, I don't see how to deploy the app multiple times with different names (or ids). Is this possible?
The background to why I want to do this, rather than run synth multiple times, is that for compliance reasons I need a single artifact - the cdk.out directory - and then deploy that multiple times. To that end, I can't use answers based around multiple runs of synth.
Try this:
You need to make your stack name a parameter passed in from the command line:
remove the app parameter from your cdk.json and specify the app parameter on the command line, passing in the "prefix" parameter.
Any resources in your stack with service-level names or IDs need to also be modified; for example, two stacks can't create a secret with the same name.
In my solution I derived from the StackProps object to add a "PrefixName" property to the stack, and the prefix can be used within my stack to influence the naming of resources.
My Program.cs looks as follows:
using Amazon.CDK;

namespace DevconnectListener
{
    sealed class Program
    {
        public static void Main(string[] args)
        {
            var app = new App();
            var props = new DevConnectStackProps()
            {
                PrefixName = args[0],
                StackName = args[0] + "-DevconnectListenerStack"
            };
            new DevconnectListenerStack(app, "DevconnectListenerStack", props);
            app.Synth();
        }
    }
}
Here is my custom DevConnectStackProps class:
using Amazon.CDK;

namespace DevconnectListener
{
    public class DevConnectStackProps : StackProps
    {
        public string PrefixName { get; set; }
    }
}
My cdk.json looks like this, with the app property removed:
{
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true"
  }
}
Then there is a CMD file named deploy-dev.cmd (which contains the AWS profile I want to use and the dev prefix parameter):
cdk deploy --app "dotnet run -p src/DevconnectListener/DevconnectListener.csproj dev" --profile dev_aws_profile
If you run this again with a different prefix, you'll see a new stack in the CloudFormation console.

AWS IAM CDK: tagging and creating access key for a user

I'm trying to use AWS CDK to create a user with minimal permissions through a custom policy, but I'm stuck with tagging that user and creating its access keys.
Below is my code:
public class Sample extends App {
    public static void main(final String[] args) {
        App app = new App();
        new UserStack(app, "user-stack");
        app.run();
    }

    public static class UserStack extends Stack {
        // Not able to add Tag and Create Access Key
        public UserStack(final App parent, final String name) {
            super(parent, name);
            PolicyStatement statement = new PolicyStatement(PolicyStatementEffect.Allow);
            statement.addResource("*");
            statement.addAction("lambda:UpdateFunctionCode");
            User user = new User(this, "LambdaDeployer", UserProps.builder().withUserName("lambda-deployer").withPath("/").build());
            user.addToPolicy(statement);
            Tag tag = new Tag("Project", "devops");
            // how to tag the user ??
            new CfnOutput(this, "LambdaDeployerOutputAccessKey", CfnOutputProps.builder().withValue("AWS::IAM::AccessKey").build());
            new CfnOutput(this, "LambdaDeployerOutputSecretAccessKey", CfnOutputProps.builder().withValue("AWS::IAM::SecretAccessKey").build());
        }
    }
}
You'll probably have to use a Custom Resource in order to call the TagUser API, since adding tags to a User is not available natively in CloudFormation.
You can use a new feature in the CDK to help you author your Custom Resource: https://github.com/awslabs/aws-cdk/pull/1850

How to prepare Pub/Sub emulator for tests?

I start the gcloud SDK Docker image:
docker run -ti --rm --expose=8085 -p 8085:8085 google/cloud-sdk:latest
Then I run:
gcloud beta emulators pubsub start --project=my-project --host-port=0.0.0.0:8085
Then I stop the server and run:
gcloud beta emulators pubsub env-init
which gives:
export PUBSUB_EMULATOR_HOST=0.0.0.0:8085
but there is no project ID. How can I set up the project for tests? How can I create topics and subscriptions?
version:
gcloud version
gives:
Google Cloud SDK 236.0.0
...
pubsub-emulator 2019.02.22
You are launching the pubsub emulator with project my-project in your second command. Once this is running, don't kill it; leave it running.
To create the topics and subscriptions, you have to use one of the SDKs. I created a demo project that does this using the Java SDK: https://github.com/nhartner/pubsub-emulator-demo/
The relevant code is this:
@Component
public class TestPubSubConfig {
    private final TransportChannelProvider channelProvider;
    private final CredentialsProvider credentialsProvider;
    private String projectId;
    private String topicName = "test-topic";
    private String subscriptionName = "test-subscription";

    TestPubSubConfig(@Autowired @Value("${spring.cloud.gcp.pubsub.emulator-host}") String emulatorHost,
                     @Autowired @Value("${spring.cloud.gcp.project-id}") String projectId) throws IOException {
        this.projectId = projectId;
        ManagedChannel channel = ManagedChannelBuilder.forTarget(emulatorHost).usePlaintext().build();
        channelProvider = FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
        credentialsProvider = NoCredentialsProvider.create();
        createTopic(topicName);
        createSubscription(topicName, subscriptionName);
    }

    @Bean
    public Publisher testPublisher() throws IOException {
        return Publisher.newBuilder(ProjectTopicName.of(projectId, topicName))
                .setChannelProvider(channelProvider)
                .setCredentialsProvider(credentialsProvider)
                .build();
    }

    private void createSubscription(String topicName, String subscriptionName) throws IOException {
        ProjectTopicName topic = ProjectTopicName.of(projectId, topicName);
        ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionName);
        try {
            subscriptionAdminClient()
                    .createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 100);
        } catch (AlreadyExistsException e) {
            // this is fine, already created
        }
    }

    private void createTopic(String topicName) throws IOException {
        ProjectTopicName topic = ProjectTopicName.of(projectId, topicName);
        try {
            topicAdminClient().createTopic(topic);
        } catch (AlreadyExistsException e) {
            // this is fine, already created
        }
    }

    private TopicAdminClient topicAdminClient() throws IOException {
        return TopicAdminClient.create(
                TopicAdminSettings.newBuilder()
                        .setTransportChannelProvider(channelProvider)
                        .setCredentialsProvider(credentialsProvider).build());
    }

    private SubscriptionAdminClient subscriptionAdminClient() throws IOException {
        return SubscriptionAdminClient.create(SubscriptionAdminSettings.newBuilder()
                .setTransportChannelProvider(channelProvider)
                .setCredentialsProvider(credentialsProvider)
                .build());
    }
}
A possible gotcha we uncovered while working with the Pub/Sub emulator is that the documentation says:
"In this case, the project ID can be any valid string; it does not need to represent a real GCP project because the Cloud Pub/Sub emulator runs locally."
"Any valid string" in this context is not any string, but specifically a valid one, meaning it looks like a valid GCP project ID. In our testing this meant strings that match the regex pattern:
/^[a-z]+-[a-z]+-\d{6}$/
Once supplied with a valid project ID, the emulator works as advertised. If you have a sandbox project in GCP you can use that ID, or you can make up your own that matches the pattern. Once you get that far, you can follow the remainder of the "Testing apps locally with the emulator" documentation.
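A quick way to sanity-check a test project ID, reading the observed shape as lowercase words separated by hyphens and ending in six digits (an observation from our testing, not a documented rule):

```java
import java.util.regex.Pattern;

public class EmulatorProjectId {
    // Observed-to-work shape for the Pub/Sub emulator, e.g. "my-project-123456".
    static final Pattern PATTERN = Pattern.compile("^[a-z]+-[a-z]+-\\d{6}$");

    static boolean looksValid(String projectId) {
        return PATTERN.matcher(projectId).matches();
    }
}
```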

Grails AWS SDK plugin not resolving PutObjectRequest

I am trying to get my Grails app working with Amazon S3. I have been following these docs: http://agorapulse.github.io/grails-aws-sdk/guide/single.html
At the following step:
amazonWebService.s3.putObject(new PutObjectRequest('some-grails-bucket', 'somePath/someKey.jpg', new File('/Users/ben/Desktop/photo.jpg')).withCannedAcl(CannedAccessControlList.PublicRead))
the project can't resolve the class PutObjectRequest, and I have tried importing com.amazonaws.services.s3.model.PutObjectRequest manually, but it still can't find the class. The only thing I can think of is that I might have an older version of the SDK, though I only followed the tutorial.
My BuildConfig.groovy...
...
dependencies {
    // dependencies for amazon aws plugin
    build 'org.apache.httpcomponents:httpcore:4.3.2'
    build 'org.apache.httpcomponents:httpclient:4.3.2'
    runtime 'org.apache.httpcomponents:httpcore:4.3.2'
    runtime 'org.apache.httpcomponents:httpclient:4.3.2'
}
plugins {
    ...
    runtime ':aws-sdk:1.9.40'
}
Has anyone else run into this issue and found a solution?
I don't use the plugin; I just use the SDK directly. Not sure what you would need a plugin for, and you don't need httpcomponents for it to work.
Add this to your dependencies block:
compile('com.amazonaws:aws-java-sdk-s3:1.10.2') {
    exclude group: 'com.fasterxml.jackson.core'
}
Here's the bean I use. I set the access key, secret key, and bucket data in the bean configuration:
class AmazonStorageService implements FileStorageService {
    String accessKeyId
    String secretAccessKey
    String bucketName
    AmazonS3Client s3client

    @PostConstruct
    private void init() {
        s3client = new AmazonS3Client(new BasicAWSCredentials(accessKeyId, secretAccessKey))
    }

    String upload(String name, InputStream inputStream) {
        s3client.putObject(new PutObjectRequest(bucketName, name, inputStream, null).withCannedAcl(CannedAccessControlList.PublicRead))
        getUrl(name)
    }

    String upload(String name, byte[] data) {
        upload(name, new ByteArrayInputStream(data))
    }

    String getUrl(String name) {
        s3client.getUrl(bucketName, name)
    }

    Boolean exists(String name) {
        try {
            s3client.getObjectMetadata(bucketName, name)
            true
        } catch (AmazonServiceException e) {
            false
        }
    }
}