builder.Configuration.AddSecretsManager(region: RegionEndpoint.EUCentral1,
    configurator: options =>
    {
        options.SecretFilter = entry => entry.Name.StartsWith($"{env}_{appName}_");
        options.KeyGenerator = (_, s) => s
            .Replace($"{env}_{appName}_", string.Empty)
            .Replace("__", ":");
        options.PollingInterval = TimeSpan.FromSeconds(10);
    });

builder.Services.Configure<DatabaseSettings>(
    builder.Configuration.GetSection(DatabaseSettings.SectionName));
If a hacker were to gain access to my EC2 Windows server, not allowing the connection string to be read from the appsettings.json file would prevent them from accessing it. However, the hacker could potentially use a tool like dnSpy to reverse engineer the code and extract the connection string. Using an obfuscator would also make the connection string harder to read. So why would I need AWS Secrets Manager?
Secrets Manager is about managing your secrets' lifecycle. If a hacker gains access to your machine, then anything on that machine is vulnerable and should be treated as compromised. For example, if a machine holding a secret has been compromised, you can terminate that instance and use Secrets Manager to rotate the secret; depending on how the other parts of your system are coded, they can automatically pick up the rotation. It also provides access controls over who can read the secrets, which can be easily revoked in the case of a compromise.
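Rotation is also why the PollingInterval in the snippet above matters: consumers can observe a rotated secret without a restart by injecting IOptionsMonitor<DatabaseSettings> instead of IOptions<DatabaseSettings>. A minimal sketch (the ReportService class and the ConnectionString property are illustrative, not from the question):

using Microsoft.Extensions.Options;

public class ReportService
{
    private readonly IOptionsMonitor<DatabaseSettings> _dbSettings;

    public ReportService(IOptionsMonitor<DatabaseSettings> dbSettings)
    {
        _dbSettings = dbSettings;
    }

    public void Run()
    {
        // CurrentValue re-reads the bound section, so after Secrets Manager
        // rotates the secret and the next poll completes, this picks up the
        // new connection string without redeploying.
        var connectionString = _dbSettings.CurrentValue.ConnectionString;
        // ... open the database connection with the fresh value ...
    }
}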
So I have an API that makes calls to AWS services, and I am using Boto3 to do this within my Python application. My question deals with Boto3's client vs. resource access levels. I think I understand the difference between them (one is low-level access, the other is higher-level, object-oriented service access), but my question is whether it is okay to instantiate both a client and a resource. For example, some functionality is easier to reach through a resource than a client, but there is some functionality only the client has. Is it bad to instantiate both and use whichever access level is easiest when needed, or will there be some sort of disconnect when using two separate access levels to connect to the same resource?
I am not running into any errors with my code to connect to SQS shown below; however, I want to make sure that down the line I am not shooting myself in the foot by arbitrarily choosing between the client and the resource for the same AWS connection.
import boto3

REGION = 'us-east-1'

sqs_r = boto3.resource('sqs', region_name=REGION)
sqs_c = boto3.client('sqs', region_name=REGION)

def create_queue(queue_name):
    queue_attributes = {
        'FifoQueue': 'true',  # note: FIFO queue names must end with '.fifo'
        'DelaySeconds': '0',
        'MessageRetentionPeriod': '900',  # 15 minutes to complete a command, else deleted.
        'ContentBasedDeduplication': 'true'
    }
    try:
        queue = sqs_r.get_queue_by_name(QueueName=queue_name)
    # catch the modeled exception instead of a bare except, which would
    # also swallow unrelated errors such as credential failures
    except sqs_r.meta.client.exceptions.QueueDoesNotExist:
        queue = sqs_r.create_queue(QueueName=queue_name, Attributes=queue_attributes)

def list_all_queues(queue_name_prefix=''):
    all_queues = sqs_c.list_queues(QueueNamePrefix=queue_name_prefix)
    print(all_queues['QueueUrls'])
    print(type(all_queues))
Both of the above functions work properly: one creates a queue and the other lists all of the queues in SQS. However, one function uses a resource and the other uses a client. Is this okay?
You can certainly use both.
The resource method actually uses the client method behind the scenes, so AWS only sees client-like calls.
In fact, the resource even contains a client. You can access it like this:
import boto3

s3 = boto3.resource('s3')
copy_source = {
    'Bucket': 'mybucket',
    'Key': 'mykey'
}
s3.meta.client.copy(copy_source, 'otherbucket', 'otherkey')
This example is from the boto3 documentation. It shows how a client is extracted from a resource and used to make a client call, effectively identical to s3_client.copy().
Both client and resource just create a local object. There is no back-end activity involved.
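If you prefer a single source of configuration while using both levels, one option is to create only the resource and pull its embedded client out of meta. A minimal sketch using the SQS names from the question (the queue name is just an example):

import boto3

REGION = 'us-east-1'

sqs_r = boto3.resource('sqs', region_name=REGION)
sqs_c = sqs_r.meta.client  # the same underlying client the resource uses

# Higher-level, object-oriented call via the resource:
queue = sqs_r.create_queue(QueueName='demo-queue')

# Lower-level call via the embedded client:
response = sqs_c.list_queues(QueueNamePrefix='demo')
print(response.get('QueueUrls', []))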
I am using IdentityServer4 with .NET Core 2.0 on AWS's ElasticBeanstalk. I have a certificate for signing tokens. What's the best way to store this certificate and retrieve it from the application? Should I just stick it with the application files? Throw it in an environment variable somehow?
Edit: just to be clear, this is a token signing certificate, not an SSL certificate.
I don't really like the term 'token signing certificate' because it sounds so benign. What you have is a private key (as part of the certificate), and everyone knows you should secure your private keys!
I wouldn't store this in your application files. If someone gets your source code, they shouldn't also get the keys to your sensitive data (if someone has your signing cert, they can generate any token they like and pretend to be any of your users).
I would consider storing the certificate in the AWS Parameter Store. You could paste the certificate into a parameter, which can be encrypted at rest. You then lock down the parameter with an AWS policy so only admins and the application can get the cert; your naughty devs don't need it! Your application would pull the parameter string when needed and turn it into your certificate object.
This is how I store secrets in my application. I can provide more examples/details if required.
Edit -- This was the final result from Stu's guidance
The project needs two AWS packages from NuGet:
AWSSDK.Extensions.NETCORE.Setup
AWSSDK.SimpleSystemsManagement
Create two parameters in the AWS SSM Parameter Store:
A plain String parameter named /MyApp/Staging/SigningCertificate whose value is a Base64-encoded .pfx file
An encrypted SecureString parameter named /MyApp/Staging/SigningCertificateSecret whose value is the password for the above .pfx file
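For reference, a sketch of creating both parameters with the AWS CLI (the cert.pfx path and the password are placeholders; base64 -w0 is the GNU coreutils flag, use base64 -i on macOS):

# Store the Base64-encoded .pfx as a plain String parameter
aws ssm put-parameter \
    --name "/MyApp/Staging/SigningCertificate" \
    --type String \
    --value "$(base64 -w0 cert.pfx)"

# Store the .pfx password encrypted at rest as a SecureString
aws ssm put-parameter \
    --name "/MyApp/Staging/SigningCertificateSecret" \
    --type SecureString \
    --value "your-pfx-password"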
This is the relevant code:
// In Startup class
private X509Certificate2 GetSigningCertificate()
{
    // Configuration is the IConfiguration built by the WebHost in my Program.cs
    // and injected into the Startup constructor
    var awsOptions = Configuration.GetAWSOptions();
    var ssmClient = awsOptions.CreateServiceClient<IAmazonSimpleSystemsManagement>();

    // This is blocking because it is called during the synchronous startup
    // operations of the WebHost -- Startup.ConfigureServices()
    var res = ssmClient.GetParametersByPathAsync(new Amazon.SimpleSystemsManagement.Model.GetParametersByPathRequest()
    {
        Path = "/MyApp/Staging",
        WithDecryption = true
    }).GetAwaiter().GetResult();

    // Decode the certificate
    var base64EncodedCert = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificate")?.Value;
    var certificatePassword = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificateSecret")?.Value;
    byte[] decodedPfxBytes = Convert.FromBase64String(base64EncodedCert);
    return new X509Certificate2(decodedPfxBytes, certificatePassword);
}

public void ConfigureServices(IServiceCollection services)
{
    // ...
    var identityServerBuilder = services.AddIdentityServer();
    var signingCertificate = GetSigningCertificate();
    identityServerBuilder.AddSigningCredential(signingCertificate);
    // ...
}
Last, you may need to attach an IAM role and/or policy to your EC2 instance(s) that gives access to these SSM parameters.
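A sketch of such a policy (the region and account ID are placeholders; if the SecureString uses a customer-managed KMS key, you would also need kms:Decrypt on that key):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:GetParametersByPath",
            "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/MyApp/Staging*"
        }
    ]
}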
Edit: I have been moving my web application's SSL termination from my load balancer to my Elastic Beanstalk instance this week. This requires storing my private key in S3. Details from AWS here: Storing Private Keys Securely in Amazon S3
I am trying to build an application in Hyperledger v1.0 which has the following features,
Multi-sig contract execution
Discoverability of contracts
Selective visibility.
But I am not able to find:
Any functions to retrieve role/user information
Define and create users with different roles.
Any examples on how can I make my smart contract discoverable by other smart contracts will also be highly appreciated.
You can obtain the certificate of the creator of the proposal in the chaincode execution in the following way:
creatorByte, err := stub.GetCreator()
if err != nil {
    return shim.Error("Error stub.GetCreator")
}
bl, _ := pem.Decode(creatorByte)
if bl == nil {
    return shim.Error("Could not decode the PEM structure")
}
cert, err := x509.ParseCertificate(bl.Bytes)
if err != nil {
    return shim.Error("ParseCertificate failed")
}
First Question:
Any functions to retrieve role/user information
This is done using the CID library.
The client identity chaincode library enables you to write chaincode which makes access control decisions based on the identity of the client (i.e. the invoker of the chaincode). In particular, you may make access control decisions based on either or both of the following associated with the client:
the client identity's MSP (Membership Service Provider) ID
an attribute associated with the client identity.
Attributes are simply name-value pairs associated with an identity. For example, email=me@gmail.com indicates an identity has the email attribute with a value of me@gmail.com.
For Node.js: https://fabric-shim.github.io/master/fabric-shim.ClientIdentity.html
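For Go chaincode, a minimal sketch using the cid library might look like this (the SmartContract receiver and the app1Admin attribute name are illustrative; app1Admin matches the register command shown below):

import (
    "github.com/hyperledger/fabric/core/chaincode/lib/cid"
    "github.com/hyperledger/fabric/core/chaincode/shim"
    pb "github.com/hyperledger/fabric/protos/peer"
)

func (s *SmartContract) adminOnly(stub shim.ChaincodeStubInterface) pb.Response {
    // MSP ID of the invoking client's organization
    mspID, err := cid.GetMSPID(stub)
    if err != nil {
        return shim.Error("could not get MSP ID: " + err.Error())
    }

    // Read an attribute embedded in the client's certificate
    val, ok, err := cid.GetAttributeValue(stub, "app1Admin")
    if err != nil || !ok || val != "true" {
        return shim.Error("caller from " + mspID + " is not an app1 admin")
    }

    // ... privileged logic here ...
    return shim.Success(nil)
}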
Second Question:
Define and create users with different roles.
https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#attribute-based-access-control
fabric-ca-client register --id.name user1 --id.secret user1pw --id.type user --id.affiliation org1 --id.attrs 'app1Admin=true:ecert,email=user1@gmail.com'
or while enrolling
fabric-ca-client enroll -u http://user1:user1pw@localhost:7054 --enrollment.attrs "email,phone:opt"
https://github.com/hyperledger/fabric/blob/release-1.3/core/chaincode/lib/cid/README.md:
The attributes are stored inside the X.509 certificate as an extension with an ASN.1 OID (Abstract Syntax Notation Object IDentifier) of 1.2.3.4.5.6.7.8.1. The value of the extension is a JSON string of the form {"attrs":{"attr1":"val1","attr2":"val2"}}.
See https://github.com/hyperledger/fabric-samples/blob/release-1.3/fabric-ca/README.md for:
How to use the Hyperledger Fabric CA client and server to generate all crypto material rather than using cryptogen. The cryptogen tool is not intended for a production environment because it generates all private keys in one location which must then be copied to the appropriate host or container. This sample demonstrates how to generate crypto material for orderers, peers, administrators, and end users so that private keys never leave the host or container in which they are generated.
I'm going to answer a single point of your question:
Define and create users with different roles.
When you create a user or a component, you create it via the Fabric CA; that is, when you create something, you define what it is going to be: a peer, an orderer, a user, and so on. So the kind of user it is depends on its role.
I don't know if I answered any of your questions. Could you give more info about them?
I am trying to execute following flow:
user hits AWS API Gateway (REST),
which triggers AWS Lambda,
which uses TinkerPop/Gremlin to connect to
TitanDB on EC2, which uses
AWS DynamoDB in the cloud (not on EC2) as its backend.
Right now I have managed to create a fully working TitanDB instance on EC2 that stores data in DynamoDB in the cloud.
I am also able to connect from AWS Lambda to EC2 through TinkerPop/Gremlin, BUT only this way:
Cluster.build()
    .addContactPoint("10.x.x.x") // IP of the EC2 instance
    .create()
    .connect()
    .submit("here I type my query as string and it will work");
And this works; however, I strongly prefer to use the "Criteria API" (GremlinPipeline) instead of the plain Gremlin language.
In other words, I need an ORM or something like that.
I know that TinkerPop includes one.
I have realized that what I need is an object of class Graph.
This is what I have tried:
Graph graph = TitanFactory
    .build()
    .set("storage.hostname", "10.x.x.x")
    .set("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager")
    .set("storage.dynamodb.client.credentials.class-name", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
    .set("storage.dynamodb.client.credentials.constructor-args", "")
    .set("storage.dynamodb.client.endpoint", "https://dynamodb.ap-southeast-2.amazonaws.com")
    .open();
However, it throws "Could not find implementation class: com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager".
Of course, the computer is correct, as IntelliJ IDEA also cannot find it.
My dependencies:
//
// aws
compile 'com.amazonaws:aws-lambda-java-core:+'
compile 'com.amazonaws:aws-lambda-java-events:+'
compile 'com.amazonaws:aws-lambda-java-log4j:+'
compile 'com.amazonaws:aws-java-sdk-dynamodb:1.10.5.1'
compile 'com.amazonaws:aws-java-sdk-ec2:+'
//
// database
// titan 1.0.0 is compatible with gremlin 3.0.2-incubating, but not yet with 3.2.0
compile 'com.thinkaurelius.titan:titan-core:1.0.0'
compile 'org.apache.tinkerpop:gremlin-core:3.0.2-incubating'
compile 'org.apache.tinkerpop:gremlin-driver:3.0.2-incubating'
What is my goal: to have a fully working Graph object.
What is my problem: I don't have the DynamoDBStoreManager class, and I do not know which dependency I have to add.
My additional question is: why does connecting through the Cluster class require only an IP address and work, while TitanFactory requires properties like those I used on the gremlin-server on EC2?
I do not want to create second server, I just want to connect as client to it and take Graph object.
EDIT:
After adding the resolver, it builds; in the output I get many repetitions of:
13689 [TitanID(0)(4)[0]] WARN com.thinkaurelius.titan.diskstorage.idmanagement.ConsistentKeyIDAuthority - Temporary storage exception while acquiring id block - retrying in PT2.4S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.342S => too slow, threshold is: PT0.3S
and execution hangs on the open() method, so it does not allow me to execute any queries.
For the DynamoDBStoreManager class, you would need this dependency:
compile 'com.amazonaws:dynamodb-titan100-storage-backend:1.0.0'
Then for the DynamoDBLocal issue, try adding this resolver:
resolvers += "AWS DynamoDB Local Release Repository" at "http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release"
I'm not entirely clear on what this means -- "Criteria API" instead of plain Gremlin language. I'm guessing that you mean that you want to interact with the graph using Java rather than passing Gremlin as a string over to a running Titan/Gremlin Server? If this is the case, then you don't need to start a Titan/Gremlin Server at all (step 4 above). Write an AWS Lambda program (step 2-3 above) that creates a direct Titan client connection via TitanFactory, where all of the Titan configuration properties are for your DynamoDB instance (step 5 above).
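Put differently, once the DynamoDBStoreManager dependency above is on the classpath, the Lambda code can work with the Graph object directly. A rough sketch (the configuration values mirror the question's TitanFactory snippet; the property values and vertex data are illustrative):

Graph graph = TitanFactory.build()
        .set("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager")
        .set("storage.dynamodb.client.endpoint", "https://dynamodb.ap-southeast-2.amazonaws.com")
        .open();

// Traverse in Java instead of submitting Gremlin strings over the wire
GraphTraversalSource g = graph.traversal();
g.addV().property("name", "alice").next();
long count = g.V().count().next();
graph.tx().commit();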
I am trying to use EWS for the first time, specifically the ExchangeServiceBinding. The code I am using is below:
_service = new ExchangeServiceBinding();
//_service.Credentials = new NetworkCredential(userName, userPassword, this.Domain);
_service.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
_service.Url = this.ServiceURL;
ExchangeImpersonationType ei = new ExchangeImpersonationType();
ConnectingSIDType sid = new ConnectingSIDType();
sid.PrimarySmtpAddress = this.ExchangeAccount;
ei.ConnectingSID = sid;
_service.ExchangeImpersonation = ei;
The application is an ASP.NET 3.5 application trying to create a task using EWS. I have tried to use impersonation because I will not know the logged-on user's domain password, so I thought impersonation would be the best fit. Any thoughts on how I can utilize impersonation? Am I setting it up correctly? I get an error while trying to run my application. I also tried without impersonation just to see if I could create a task; no luck either. Any help would be appreciated. Thanks.
Without broader context for your code snippet, I can't tell for sure what's wrong, but here are a few things you might find useful...
You mention you had trouble connecting without impersonation.
I'm assuming you are using Exchange Server 2007 SP1, yes?
Do you have a mailbox for which you do know the username and password? If so, consider trying to connect to that mailbox, just to see if you can send an email or query for inbox count. That will help verify your connection at least.
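For example, a minimal sanity check with the generated proxy classes could query the inbox's unread count (a sketch; it assumes the _service binding from the question and that the credentials resolve to the target mailbox):

// Ask EWS for the inbox of the mailbox the current credentials resolve to
var request = new GetFolderType
{
    FolderShape = new FolderResponseShapeType { BaseShape = DefaultShapeNamesType.AllProperties },
    FolderIds = new BaseFolderIdType[]
    {
        new DistinguishedFolderIdType { Id = DistinguishedFolderIdNameType.inbox }
    }
};

GetFolderResponseType response = _service.GetFolder(request);
var message = (FolderInfoResponseMessageType)response.ResponseMessages.Items[0];

if (message.ResponseClass == ResponseClassType.Success)
{
    var inbox = (FolderType)message.Folders[0];
    Console.WriteLine("Unread items in inbox: " + inbox.UnreadCount);
}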
As to Exchange impersonation:
Have the permissions been set on the Client Access Server (CAS) to enable impersonation?
Have the permissions been set on either the mailbox or the mailbox database (containing the mailbox you are attempting to access)?
Are you in a cross-forest scenario that requires additional trust relationships?
If not, that might explain why you cannot connect.
Some links you might find useful
Configuring (http://msdn.microsoft.com/en-us/library/bb204095.aspx)
Using Exchange impersonation (http://msdn.microsoft.com/en-us/library/bb204088.aspx)
Access multiple resource mailboxes (http://msexchangeteam.com/archive/2007/12/13/447731.aspx)