aws AmazonDynamoDBSessionManagerForTomcat ClassNotFound issue

I have created a simple AWS web project using the Eclipse SDK.
I enabled session management with DynamoDB so that my sessions are not sticky and can persist if the load balancer adds or removes instances.
This project has a simple object:
package com.ns.ts.dto;

import java.io.Serializable;

public class User implements Serializable {

    private static final long serialVersionUID = -7038692393544658830L;

    private String user;
    private String name;

    public String getUser() {
        return user;
    }

    public void setUser(String user) {
        this.user = user;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
There are two servlets: setUser (which takes some parameters and puts a User in the session with some values)
and getUser (which displays the User values from the session).
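For context, a minimal sketch of what the setUser servlet might look like (the servlet class name, request parameter names, and session attribute key are my assumptions, not the actual code): it builds a User from request parameters and stores it in the HTTP session, which the DynamoDB session manager then serializes.
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.ns.ts.dto.User;

public class SetUserServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        User user = new User();
        user.setUser(req.getParameter("user"));
        user.setName(req.getParameter("name"));
        // The session attribute is what the session manager serializes into the DynamoDB table.
        req.getSession(true).setAttribute("user", user);
        resp.getWriter().println("User stored in session");
    }
}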
I deploy the project and all seems fine. A table gets created in DynamoDB.
Now I call setUser. This sets the object User in the session with some values.
Next, I call getUser and this displays the values of the User object from the session.
(It seems the session is still cached on the EC2 instance.)
I wait for some time and again call setUser with a different set of parameters.
I watch the table in DynamoDB to make sure that the session object is in the table (I confirm this by the last-updated time on the row).
I wait for some time and again call getUser.
This throws an error:
java.lang.ClassNotFoundException: com.ns.ts.dto.User
The error is generated at
com.amazonaws.tomcatsessionmanager.amazonaws.services.dynamodb.sessionmanager.DynamoDBSessionStore.load
I think this error is generated when the session manager jar tries to deserialize the session object from DynamoDB and cannot find my custom class on the classpath of Tomcat's shared lib.
Has anybody faced this before?
Is there a workaround/solution for storing custom objects in a session backed by DynamoDB?

If the jar containing your User class is located under the WEB-INF/lib directory of your webapp, try moving it under Tomcat's lib directory. It probably needs to be alongside the Amazon jar for the class loading to work correctly.

There is an update, and this problem seems to have been resolved:
https://github.com/aws/aws-dynamodb-session-tomcat/issues/3

Related

How to get list of public urls for google cloud storage with NodeJs client instead of storage object

When running (new Storage()).bucket('my-bucket-name').getFiles(), I get a list of objects with this structure. All the items are public, and I'd rather not process the objects to piece together the public URLs by "hand" (https://storage.cloud.google.com/[object.metadata.bucket]/[object.metadata.name]), so I was wondering if the Node.js client for GCP offers anything like this.
I found a similar link here, except it is for Python.
Thank you!
As mentioned in the thread you posted, there is no direct way to do this through the client libraries that Google has in place. Some objects allow you to get the URL directly, but not all of them do.
Because of this, it's safer to piece the URLs together in your own code. As you mention, and as described in this Google document, you can use the URL pattern http(s)://storage.googleapis.com/[bucket]/[object] to quickly construct the URL.
Given the response of the API, you can build the URLs in a small loop such as:
function main(bucketName = 'my-bucket') {
  // The base URL prefix for public objects
  const url = 'https://storage.googleapis.com/';

  // Imports the Google Cloud client library
  const {Storage} = require('@google-cloud/storage');

  // Creates a client
  const storage = new Storage();

  async function listFiles() {
    // Lists files in the bucket
    const [files] = await storage.bucket(bucketName).getFiles();

    console.log('URLs:');
    files.forEach(file => {
      console.log(url.concat(bucketName, '/', file.name));
    });
  }

  listFiles().catch(console.error);
}
This was adapted from the sample code for listing files in GCP's GitHub repository.

AWS DynamoDB read newly inserted record

I am new to AWS and its services. For hands-on practice, I prepared a DynamoDB use case: whenever a record is inserted into DynamoDB, that record should be moved to S3 for further processing. I wrote the code snippet below in Java using the KCL (Kinesis Client Library):
public static void main(String... args) {
    KinesisClientLibConfiguration workerConfig = createKCLConfiguration();
    StreamsRecordProcessorFactory recordProcessorFactory = new StreamsRecordProcessorFactory();

    System.out.println("Creating worker");
    Worker worker = createKCLCWorker(workerConfig, recordProcessorFactory);

    System.out.println("Starting worker");
    worker.run();
}

public class StreamsRecordProcessorFactory implements IRecordProcessorFactory {
    public IRecordProcessor createProcessor() {
        return new StreamRecordsProcessor();
    }
}
The processRecord method in the StreamRecordsProcessor class:
private void processRecord(Record record) {
    if (record instanceof RecordAdapter) {
        com.amazonaws.services.dynamodbv2.model.Record streamRecord = ((RecordAdapter) record)
                .getInternalObject();
        if ("INSERT".equals(streamRecord.getEventName())) {
            Map<String, AttributeValue> attributes
                    = streamRecord.getDynamodb().getNewImage();
            System.out.println(attributes);
            System.out.println(
                    "New item name: " + attributes.get("name").getS());
        }
    }
}
From my local environment, I can see the record whenever we add records to DynamoDB, but I have a few questions:
How can I deploy this project to AWS?
What is the procedure, and is any configuration required on the AWS side?
Please share your thoughts.
You should be able to use AWS Lambda as the integration point: a Lambda function can read data from the DynamoDB stream and push it into a Kinesis Firehose delivery stream, which ultimately deposits the data in S3. Here is an AWS blog article that can serve as a high-level guide for doing this. It gives you information about the AWS components you can use to build this, and additional research on each component can help you put the pieces together.
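For illustration, here is a minimal sketch of such a Lambda handler, assuming the AWS SDK for Java v1 and the aws-lambda-java-events library (where the stream record reuses the dynamodbv2 model classes); the delivery stream name "my-delivery-stream" is a placeholder, not something from your setup. It forwards the new image of every INSERT to Firehose, which then delivers the data to S3:
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.Record;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;

public class DynamoStreamToFirehoseHandler implements RequestHandler<DynamodbEvent, Void> {

    private final AmazonKinesisFirehose firehose = AmazonKinesisFirehoseClientBuilder.defaultClient();

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbEvent.DynamodbStreamRecord streamRecord : event.getRecords()) {
            if ("INSERT".equals(streamRecord.getEventName())) {
                // Forward the new item image as a simple text payload, one line per record.
                String payload = streamRecord.getDynamodb().getNewImage().toString() + "\n";
                firehose.putRecord(new PutRecordRequest()
                        .withDeliveryStreamName("my-delivery-stream") // placeholder delivery stream name
                        .withRecord(new Record().withData(
                                ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)))));
            }
        }
        return null;
    }
}
Deploying this as a Lambda function with the DynamoDB stream configured as its event source would remove the need to run the KCL worker on an instance you manage yourself.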
Give that a try; if you get stuck anywhere, please add a comment and I'll respond in due time.

EF Core migration can't use secret manager

When I create .NET Core web applications, I use the secret manager during testing. I am generally able to create a new web project (MVC or Web API), right-click on the project, and select "Manage User Secrets". This opens a JSON file where I add the secrets. I then use this in my Startup.cs, something like this:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseMySql(Configuration["connectionString"]));
The website works fine with this and connects to the database without problems. However, when I try using EF Core migration commands such as add-migration, they don't seem to be able to access the connection string from the secret manager. I get an error saying "connection string can't be null". The error goes away when I replace Configuration["connectionString"] with the hard-coded string. I have checked online and checked the .csproj file; it already contains the following lines:
<UserSecretsId>My app name</UserSecretsId>
And later:
<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.1" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>
Is there anything I need to add so the migrations can access the connection string?
Update
I only have one constructor in the context class:
public ApplicationDBContext(DbContextOptions<ApplicationDBContext> options) : base(options)
{
}
I am currently running into this exact problem as well. I have come up with a solution that works for now, but one that may be considered messy at best.
I have created a static Configuration class that provides the IConfiguration interface when requested:
public static class Configuration
{
    public static IConfiguration GetConfiguration()
    {
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();
    }
}
In the Migration, you can then get the Configuration File and access its UserSecrets like this:
protected override void Up(MigrationBuilder migrationBuilder)
{
    var conf = Configuration.GetConfiguration();
    var secret = conf["Secret"];
}
I have tested creating a SQL script with these user secrets, and it works (you obviously wouldn't want to keep the script lying around, since it would expose the actual secret).
Update
The above configuration can also be set up in the Program.cs class, in the BuildWebHost method:
var config = new ConfigurationBuilder().AddUserSecrets<Startup>().Build();
return WebHost.CreateDefaultBuilder(args).UseConfiguration(config)...Build()
Or in the Startup constructor, if using that convention.
Update 2 (explanation)
It turns out this issue occurs because the migration commands run with the environment set to "Production". The secret manager is pre-set to work only in the "Development" environment (for a good reason). The .AddUserSecrets<Startup>() call simply adds the secrets for all environments.
To ensure that this isn't set on your production server, there are two solutions I have noticed. One is suggested here: https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell
Set env:ASPNETCORE_ENVIRONMENT before running to specify the ASP.NET Core environment.
This solution means there is no need to set .AddUserSecrets<Startup>() on every project created on the computer in the future. However, if you happen to share this project across other computers, this needs to be configured on each computer.
The second solution is to add .AddUserSecrets<Startup>() only in debug builds, like this:
return new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
#if DEBUG
    .AddUserSecrets<Startup>()
#endif
    .AddEnvironmentVariables()
    .Build();
Additional Info
The IConfiguration interface can be passed to controllers in their constructor, e.g.
private readonly IConfiguration _configuration;

public TestController(IConfiguration configuration)
{
    _configuration = configuration;
}
Thus, any secrets and application settings are accessible in that controller via _configuration["secret"].
However, if you want to access application secrets from, for example, a migration file, which exists outside of the web application itself, you need to follow the original answer, because there is no easy way (that I know of) to access those secrets otherwise (one use case I can think of would be seeding the database with an admin user and a master password).
To use migrations in .NET Core with user secrets, we can also create a class (SqlContextFactory) that builds its own instance of the SqlContext using a specified configuration builder. This way we do not have to create some kind of workaround in our Program or Startup classes. In the example below, SqlContext is an implementation of DbContext/IdentityDbContext.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
using Microsoft.Extensions.Configuration;

public class SqlContextFactory : IDesignTimeDbContextFactory<SqlContext>
{
    public SqlContext CreateDbContext(string[] args)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();

        var builder = new DbContextOptionsBuilder<SqlContext>();
        builder.UseSqlServer(config.GetConnectionString("DefaultConnection"));
        return new SqlContext(builder.Options);
    }
}
Since I have noticed a lot of people running into this confusion, I am writing a simplified version of this resolution.
The Problem/Confusion
The secret manager in .NET Core is designed to work only in the Development environment. When you run your app, your launchSettings.json file ensures that the ASPNETCORE_ENVIRONMENT variable is set to "Development". However, EF migrations don't use this file. As a result, when you run migrations, your web app does not run in the Development environment and thus has no access to the secret manager. This often causes confusion as to why EF migrations can't use the secret manager.
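For reference, the relevant part of launchSettings.json typically looks roughly like this (the profile name below is just a placeholder):
{
  "profiles": {
    "MyWebApp": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}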
The Resolution
Make sure the environment variable ASPNETCORE_ENVIRONMENT is set to "Development" on your machine.
The approach of using .AddUserSecrets<Startup>() will create a circular reference if we have our DbContext in a separate class library and use a DesignTimeFactory.
The clean way of doing that is:
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<AppDbContext>
{
    public AppDbContext CreateDbContext(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
#if DEBUG
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{project path}/appsettings.Development.json",
                optional: true, reloadOnChange: true)
#else
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{startup project path}/appsettings.json",
                optional: true, reloadOnChange: true)
#endif
            .AddEnvironmentVariables()
            .Build();

        var connectionString = configuration.GetConnectionString("DefaultConnection");
        var builder = new DbContextOptionsBuilder<AppDbContext>();
        Console.WriteLine(connectionString);
        builder.UseSqlServer(connectionString);
        return new AppDbContext(builder.Options);
    }
}
The Explanation:
Secret Manager is meant for development time only, so this will not affect the migration if you run it in a pipeline in QA or Production stages; to cover that case, we use the dev connection string that exists in appsettings.Development.json inside the #if DEBUG branch.
The benefit of this approach is that it avoids referencing the web project's Startup class while using a class library as your data infrastructure.

How to secure an AWS REST service

I'm looking for a simple Java example of a secured AWS REST service using AWS SigV4, say for 'user01'.
What needs to 'wrap' the following?
@RequestMapping(value = "/employee/{name}", method = RequestMethod.POST)
public Employee getEmployee(@PathVariable("name") String name) {
    return new Employee(name);
}
What would the client look like, i.e. what needs to 'wrap' the following?
Employee emp = restTemplate.postForObject(url, request, Employee.class);
It's only the AWS components that have thrown me.
I believe it only takes a few lines of code from the EC2 SDK - but which lines?
The AWS documentation leads you all over the place but has few hard examples.

wso2 identity server 4.1.0 read-only with custom UserStoreManager class

I have a class that extends AbstractUserStoreManager.
My problem is currently with the getRoleListOfUser() method. The list returned here shows up in the Unassigned Role list in the UI, rather than in the Assigned Role list; i.e., the method appears to behave as if it is returning all possible roles rather than the ones assigned to the user.
Is there an internal property I need to set with this role list? Is this a known bug?
@Override
public String[] getRoleListOfUser(String userName) throws UserStoreException {
    // check whether roles exist in cache
    try {
        String[] names = getRoleListOfUserFromCache(this.tenantId, userName);
        if (names != null) {
            return names;
        }
    } catch (Exception e) {
        // if not in cache, continue
    }

    List<String> roles = new ArrayList<String>();
    // **code removed** - but roles is populated by a web service.
    String[] roleList = roles.toArray(new String[roles.size()]);
    addToUserRolesCache(this.tenantId, userName, roleList);
    // TODO: make roleList apply to assigned roles, rather than unassigned roles!
    return roleList;
}
My implementation is unlike the existing examples because I am using web service calls instead of querying directly against a user store with JDBC.
However, this appears similar to this issue: ldap-user-store-not-working, except that I am using my own class instead of LDAP.
Any suggestions would be appreciated.
The 4.1.1 release associates roles with users correctly in AD and LDAP. This version is not out yet, maybe another three weeks, but the alpha release I am experimenting with has many improvements regarding role and domain association.