We are working on FHIR(Fast Healthcare Interoperability Resources).
We have followed “FHIR Works on AWS” and deployed the CloudFormation template given by AWS in our AWS environment. The following is the template that we have deployed.
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific (customized) IDs as the primary key in the server.
Problem: the server is not allowing us to override or maintain client-specific (customized) IDs as the primary key. In fact, at runtime it generates its own IDs and ignores the IDs provided by us.
Could you please let us know if there is any way to post a FHIR resource with client-specific IDs into the FHIR server (DynamoDB)?
We have observed that by using a "PUT" call (https://hl7.org/fhir/http.html#upsert) we might be able to create the resource with customized IDs as primary keys, but there is a precondition stating that the "CapabilityStatement.rest.resource.updateCreate" flag must be set to "true".
Is there any way to update the "CapabilityStatement.rest.resource.updateCreate" flag through the AWS console or by any manual process?
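For reference, this is the kind of upsert we are trying to make (a sketch only; the base URL, token, and Patient resource below are placeholders, not values from our deployment):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FhirUpsertSketch
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Placeholder Cognito access token for the FHIR Works on AWS authorizer.
        client.DefaultRequestHeaders.Add("Authorization", "Bearer <access-token>");

        // The id inside the body must match the id in the URL.
        var patientJson = "{\"resourceType\":\"Patient\",\"id\":\"client-id-123\",\"active\":true}";
        var content = new StringContent(patientJson, Encoding.UTF8, "application/fhir+json");

        // PUT /<resourceType>/<client-specified-id> is an update-as-create (upsert);
        // per the FHIR spec the server only honors the client id when updateCreate is enabled.
        var response = await client.PutAsync(
            "https://<api-id>.execute-api.<region>.amazonaws.com/dev/Patient/client-id-123",
            content);

        Console.WriteLine(response.StatusCode); // expect 201 Created on upsert, 200 OK on update
    }
}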
I have a problem with connection to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException:
Requested resource not found (Service: AmazonDynamoDB; Status Code:
400; Error Code: ResourceNotFoundException; Request ID: ..
But I have a table and region is correct.
From the docs, it's either that you don't have a table with that name or that it is in CREATING status.
I would double-check that the table does in fact exist, in the correct region, and that you're using an access key that can reach it.
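If you'd rather verify from code than from the console, a quick check along these lines can help (a sketch using the AWS SDK for .NET; the table name and region are placeholders):

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class TableCheck
{
    static async Task Main()
    {
        // Use the region you believe the table lives in.
        var client = new AmazonDynamoDBClient(RegionEndpoint.USEast1);
        try
        {
            var resp = await client.DescribeTableAsync("MyTable");
            // A table still in CREATING status (vs ACTIVE) also triggers
            // ResourceNotFoundException on reads, per the docs quoted above.
            Console.WriteLine("Table status: " + resp.Table.TableStatus);
        }
        catch (ResourceNotFoundException)
        {
            Console.WriteLine("No table with that name in this region/account.");
        }
    }
}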
My problem was stupid, but maybe someone has the same one... I had recently changed my default AWS credentials (~/.aws/credentials): I was testing in another account and forgot to roll back the values to the regular account.
I spent a day researching this problem in my project, and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message means that your client can't reach a table in your DB.
You should check the following (a sketch illustrating checks 3 and 5 follows the list):
1. Your database is running.
2. Your accessKey and secretKey are valid for the database.
3. Your DB endpoint is valid and contains the correct protocol ("http://" or "https://"), the correct hostname, and the correct port.
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set as a parameter in your credentials. This check is optional, because some database environments (e.g. Testcontainers Dynalite) don't validate the region at all, in which case any non-empty region value will do.
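To make checks 3 and 5 explicit in code, spell out the endpoint and region when building the client. A sketch in C# for illustration (the endpoint, region, and dummy credentials are placeholders for a local/test setup; the same idea applies in the Java SDK):

using Amazon.DynamoDBv2;
using Amazon.Runtime;

class LocalDynamoClientFactory
{
    public static AmazonDynamoDBClient Create()
    {
        var config = new AmazonDynamoDBConfig
        {
            // Check 3: protocol + hostname + port, spelled out explicitly.
            ServiceURL = "http://localhost:8000",
            // Check 5: any non-empty region satisfies emulators like Dynalite.
            AuthenticationRegion = "us-east-1"
        };
        // Dummy credentials are fine for local emulators, never for real AWS.
        return new AmazonDynamoDBClient(
            new BasicAWSCredentials("dummy-access-key", "dummy-secret-key"),
            config);
    }
}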
In my case the problem was that I couldn't save and load data from a table in tests where DynamoDB was substituted by Testcontainers and Dynalite. I found out that in our project the tables are created by a Spring component marked with the @Component annotation. And in tests we use a global setting that lazy-loads components, so our component wasn't loaded by default because nothing called it explicitly in the test. ¯\_(ツ)_/¯
If the DynamoDB table is in a different region, make sure to set it before initializing the DynamoDB client:
AWS.config.update({ region: "your-dynamoDB-region" });
This works for me :)
Always ensure that you do one of the following:
The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you work on.
The best choice is to specify these constants explicitly in a separate class/config in your project, always import that, and use it in the boto3 calls. This gives you flexibility if you later need to add or change regions to meet enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
boto3.resource("dynamodb", region_name='us-east-2')
I was getting this issue in my .NET Core application. The following fixed it for me, in the Startup class --> ConfigureServices method (note the usings; AWSOptions comes from the AWSSDK.Extensions.NETCore.Setup package):
using Amazon;
using Amazon.Extensions.NETCore.Setup;
...
services.AddDefaultAWSOptions(new AWSOptions
{
    Region = RegionEndpoint.GetBySystemName("eu-west-2")
});
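With that in place, clients resolved through DI pick up the configured region. A minimal sketch, assuming the AWSSDK.DynamoDBv2 package (DynamoDB is my example; the original snippet doesn't say which service was being used):

using Amazon.DynamoDBv2;

// Registers IAmazonDynamoDB; injected instances use the default options (region) set above.
services.AddAWSService<IAmazonDynamoDB>();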
I got this error from a Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
I spent a week fixing the issue.
The root cause and the steps to track it down are described in the GitHub issue thread below, where it was fixed:
https://github.com/soto-project/soto/issues/595
I have set up a cluster for WSO2-IS (2 instances on different machines) based on the information provided here - https://docs.wso2.com/display/CLUSTER44x/WSO2+Clustering+and+Deployment+Guide
Set up the DB with a user store, a shared registry, and 2 local registries
Copied the DB driver JAR to the component lib
Updated master-datasources.xml
Updated registry.xml (made sure the master is read-only false and the worker is read-only true)
Updated axis2.xml and used WKA for the membership scheme
Performed the other changes as suggested in the link
Started the master with the -Dsetup option and the worker without the -Dsetup option
Verified that the governance folder is shown as a symlink
I can see the interaction between both nodes; there are Hazelcast messages about the node joining when the worker is started.
A user created on one node is able to log in to the other instance, and service providers are also automatically available when viewed through the UI.
The problem is that when I create a secondary user store (JDBC) on the first node and go to the list on the second node, the secondary user store is not present, and I cannot view its users in the user list either.
Am I missing something, or is this the way the cluster is supposed to perform, i.e. secondary user stores have to be shared in some other way?
Thanks,
Vikas
Secondary user store configurations are not synced between two nodes by default. Once you create a secondary user store from the UI, it creates a configuration file in the following location:
[WSO2_IS]/repository/deployment/server/userstores/
This configuration file needs to be copied manually to the other node, or you have to use some file-synchronization mechanism to do it. Since this is not a frequent task, it is better to simply copy the file.
For more information:
https://docs.wso2.com/display/IS500/Configuring+Secondary+User+Stores
I am completely new to Amazon Web Services; however, I do have an account and I am able to browse our list of servers. I am trying to create a database backup programmatically using .NET. I have installed the AWS SDK for .NET, and I have built and run the sample empty console program.
I can see that I can create an instance of the RDS service with the following line:
AmazonRDS rds = AWSClientFactory.CreateAmazonRDSClient(RegionEndpoint.USEast1);
However, I notice that rds.CreateDBSnapshot() needs a request object, but I don't see anything like CreateDBSnapshotRequest in the referenced .dll. Can anyone help with a working example?
As you said, CreateDBSnapshotRequest is the parameter you have to pass to this function.
CreateDBSnapshotRequest is defined in the Amazon.RDS.Model namespace within the AWSSDK.dll assembly (version 1.5.25.0).
Within CreateDBSnapshotRequest you must pass the DB instance identifier (for example mydbinstance-1) that you defined when you invoked CreateDBInstance (or one of its related methods), and the identifier for the snapshot you wish to generate (example: my-snapshot-id) for this DB instance.
edit / example
Well, there are a couple of ways to achieve this; here's one example - hope it clears up your doubts:
using Amazon.RDS;
using Amazon.RDS.Model;
...

// Gets the credentials from the default configuration.
AmazonRDS rdsClient = AWSClientFactory.CreateAmazonRDSClient();

// Name the instance to snapshot and the snapshot to create.
CreateDBSnapshotRequest dbSnapshotRequest = new CreateDBSnapshotRequest();
dbSnapshotRequest.DBInstanceIdentifier = "my-oracle-instance";
dbSnapshotRequest.DBSnapshotIdentifier = "daily-snapshot";

rdsClient.CreateDBSnapshot(dbSnapshotRequest);
Don't forget that the DB instance (in the example, my-oracle-instance) must exist (duh :) and must be in the available state.
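If you want to check that programmatically before taking the snapshot, something along these lines should work (a sketch against the same 1.x-era SDK; the response property names are from memory, so double-check them against your assembly):

// Describe the instance and inspect its status before snapshotting.
DescribeDBInstancesRequest describeRequest = new DescribeDBInstancesRequest();
describeRequest.DBInstanceIdentifier = "my-oracle-instance";

DescribeDBInstancesResponse describeResponse = rdsClient.DescribeDBInstances(describeRequest);
// In some 1.x SDK versions the list property is named DBInstance rather than DBInstances.
string status = describeResponse.DescribeDBInstancesResult.DBInstances[0].DBInstanceStatus;

if (status == "available")
{
    rdsClient.CreateDBSnapshot(dbSnapshotRequest);
}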
I am currently upgrading an application that made use of Sync Framework 1 to version 2. As part of this I am using the new scoping system and dropping the use of SQL Server Change Tracking.
It would appear that, in order to provision a remote database for Sync Framework, a number of new tables and stored procedures must be created.
Is there a way, using the API, to remove these artefacts should they no longer be needed?
Thanks
See http://msdn.microsoft.com/en-us/library/ff928603%28SQL.110%29.aspx
Remove a single scope:

// Remove the retail customer scope from the SQL Server client database.
SqlSyncScopeDeprovisioning clientSqlDepro = new SqlSyncScopeDeprovisioning(clientSqlConn);
clientSqlDepro.DeprovisionScope("RetailCustomers");

Remove all sync metadata artifacts:

// Remove all scopes and sync metadata from the SQL Server client database.
SqlSyncScopeDeprovisioning clientSqlDepro = new SqlSyncScopeDeprovisioning(clientSqlConn);
clientSqlDepro.DeprovisionStore();
If you use a custom schema and/or prefix for the table names, don't forget to set these on the SqlSyncScopeDeprovisioning object as well.
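For example (a sketch; I believe the deprovisioning class exposes the same ObjectSchema/ObjectPrefix properties as SqlSyncScopeProvisioning, and "Sync"/"scope_" are placeholder values, not defaults):

SqlSyncScopeDeprovisioning clientSqlDepro = new SqlSyncScopeDeprovisioning(clientSqlConn);

// Must match whatever schema/prefix was used at provisioning time,
// otherwise the tracking tables and stored procedures won't be found.
clientSqlDepro.ObjectSchema = "Sync";    // placeholder schema name
clientSqlDepro.ObjectPrefix = "scope_";  // placeholder object-name prefix

clientSqlDepro.DeprovisionStore();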