Getting an error while accessing AWS DAX from a localhost client
Error:
SEVERE: caught exception during cluster refresh: java.io.IOException: failed to configure cluster endpoints from hosts: [daxcluster*:8111]
java.io.IOException: failed to configure cluster endpoints from hosts:
Sample test code:
public static String clientEndPoint = "*.amazonaws.com:8111";

DynamoDB getDynamoDBClient() {
    System.out.println("Creating a DynamoDB client");
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withRegion(Regions.US_EAST_1)
            .build();
    return new DynamoDB(client);
}

static DynamoDB getDaxClient(String daxEndpoint) {
    ClientConfig daxConfig = new ClientConfig().withEndpoints(daxEndpoint);
    daxConfig.setRegion(Regions.US_EAST_1.getName());
    AmazonDaxClient client = new ClusterDaxClient(daxConfig);
    DynamoDB docClient = new DynamoDB(client);
    return docClient;
}

public static void main(String[] args) {
    DynamoDB client = getDaxClient(clientEndPoint);
    Table table = client.getTable("dev.Users");
    Item fa = table.getItem(new GetItemSpec().withPrimaryKey("userid", "tf#gmail.com"));
    System.out.println(fa);
}
A DAX cluster runs within your VPC. To connect from your laptop to the DAX cluster, you need to VPN into your VPC: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
Answer: DAX is only supported within a VPC. I faced this same issue myself and found out the hard way.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html
Usage Notes: For a list of AWS Regions where DAX is available, refer to https://aws.amazon.com/dynamodb/pricing.
DAX supports applications written in Java, Node.js, Python, and .NET, using AWS-provided clients for those programming languages.
DAX does not support Transport Layer Security (TLS).
DAX is only available for the EC2-VPC platform. (There is no support
for the EC2-Classic platform.)
DAX clusters maintain metadata about the attribute names of items they
store, and that metadata is maintained indefinitely (even after the
item has expired or been evicted from the cache). Applications that
use an unbounded number of attribute names can, over time, cause
memory exhaustion in the DAX cluster. This limitation applies only to
top-level attribute names, not nested attribute names. Examples of
problematic top-level attribute names include timestamps, UUIDs, and
session IDs.
Note that this limitation applies only to attribute names, not their values.
I want to list all VMs that are in a Managed Instance Group using the Google Cloud client libraries for .NET.
Application Type: Console App, .NET 7.0
Library: Google.Cloud.Compute.V1
RegionInstanceGroupManagersClient regionInstanceGroupManagersClient = await RegionInstanceGroupManagersClient.CreateAsync();
var vms = regionInstanceGroupManagersClient.ListManagedInstancesAsync("projectId", "region", "mig_name");
await foreach (var vm in vms)
{
Console.WriteLine(vm.Instance);
}
Error:
Grpc.Core.RpcException: 'Status(StatusCode="InvalidArgument",
Detail="Invalid value for field 'pageToken': ''. Supplied restart
token corresponds to a zone not supported by this managed instance
group.")'
I'm trying to understand this issue, as the documentation says pageToken is not required in the request. According to the documentation, if pageToken is not provided, the first page will be retrieved.
You are using a region-based instance group to list the VMs under the managed instance group; instead, you need to use InstanceGroupManager.Types.ListManagedInstancesResults. In this you need to use the string constant public const string Pageless = "PAGELESS".
This will ignore the pageToken query parameter and return the results in a single response. Can you try this and post if you get any errors?
Pageless: (Default) Pagination is disabled for the group's listManagedInstances API method. maxResults and pageToken query parameters are ignored and all instances are returned in a single response.
We've been getting started with CubeJS. We are using BigQuery, with the following hierarchy:
Project (all clients)
Dataset (corresponding to a single client)
Tables (different data types for a single client)
We'd like to use COMPILE_CONTEXT to allow different clients to access different Datasets based on the JWT that we issue them after authentication. The JWT includes the user info that'd cause our schema to select a different dataset:
const {
  securityContext: { dataset_id },
} = COMPILE_CONTEXT;

cube(`Sessions`, {
  sql: `SELECT * FROM ${dataset_id}.sessions_export`,

  measures: {
    // Count of all session objects
    count: {
      sql: `Status`,
      type: `count`,
    },
  },
});
In testing, we've found that the COMPILE_CONTEXT global variable is set when the server is launched, meaning that even if a different client submits a request to Cube with a different dataset_id, the old one is used by the server, sending info from the old dataset. The Cube docs on Multi-tenancy state that COMPILE_CONTEXT should be used in our scenario (at least, this is my understanding):
Multitenant COMPILE_CONTEXT should be used when users in fact access different databases. For example, if you provide SaaS ecommerce hosting and each of your customers have a separate database, then each ecommerce store should be modelled as a separate tenant.
SECURITY_CONTEXT, on the other hand, is set at Query time, so we tried to also access the appropriate data from SECURITY_CONTEXT like so:
cube(`Sessions`, {
sql: `SELECT * FROM ${SECURITY_CONTEXT.dataset_id}.sessions_export`,
But the query being sent to the database (found in the error log in the Cube dev server) is SELECT * FROM [object Object].sessions_export) AS sessions.
I'd love to inspect the SECURITY_CONTEXT variable but I'm having trouble finding how to do this, as it's only accessible within our cube Sql to my knowledge.
Any help would be appreciated! We are open to other routes besides those described above. In a nutshell, how can we deliver a specific dataset to a client using a unique JWT?
Given that all your datasets are in the same BigQuery database, I think your use-case reflects the Multiple DB Instances with Same Schema part of the documentation (that title could definitely be improved):
// cube.js
const PostgresDriver = require('@cubejs-backend/postgres-driver');

module.exports = {
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.dataset_id}`,
  driverFactory: ({ securityContext }) =>
    new PostgresDriver({
      database: `${securityContext.dataset_id}`,
    }),
};
// schema/Sessions.js
cube(`Sessions`, {
  sql: `SELECT * FROM sessions_export`,
});
I am trying to connect my app to DynamoDB. I have set everything up the way Amazon recommends, but I still keep getting the same error over and over again:
7-21 11:02:29.856 10027-10081/com.amazonaws.cognito.sync.demo E/AndroidRuntime﹕ FATAL EXCEPTION: AsyncTask #1
Process: com.amazonaws.cognito.sync.demo, PID: 10027
java.lang.RuntimeException: An error occured while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:304)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:355)
at java.util.concurrent.FutureTask.setException(FutureTask.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:242)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: GIONOKT7E3AMTC4PO19CPLON93VV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:710)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2930)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.updateItem(AmazonDynamoDBClient.java:930)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$SaveObjectHandler.doUpdateItem(DynamoDBMapper.java:1173)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$2.executeLowLevelRequest(DynamoDBMapper.java:873)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$SaveObjectHandler.execute(DynamoDBMapper.java:1056)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.save(DynamoDBMapper.java:904)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.save(DynamoDBMapper.java:688)
at com.amazonaws.cognito.sync.Utils.FriendsSyncManager.initalize_credentialprovider(FriendsSyncManager.java:43)
at com.amazonaws.cognito.sync.ValU.FriendListActivity$SyncFriends.doInBackground(FriendListActivity.java:168)
at com.amazonaws.cognito.sync.ValU.FriendListActivity$SyncFriends.doInBackground(FriendListActivity.java:160)
at android.os.AsyncTask$2.call(AsyncTask.java:292)
What could be the solution?
Okay, it seems you need to add:
ddbClient.setRegion(Region.getRegion(Regions.EU_WEST_1));
// Add correct Region. In my case its EU_WEST_1
after the following line:
AmazonDynamoDBClient ddbClient = new AmazonDynamoDBClient(credentialsProvider);
Now it works. The table was successfully created.
Have a nice day and thanks!
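For reference, a minimal sketch of the same region fix using the newer client builder (assuming AWS SDK for Java v1 with the builder API available; credentialsProvider stands in for the same provider used above):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Build the client with the region pinned up front (EU_WEST_1 here, as above),
// so no separate setRegion call is needed.
AmazonDynamoDB ddbClient = AmazonDynamoDBClientBuilder.standard()
        .withCredentials(credentialsProvider) // same provider as in the answer
        .withRegion(Regions.EU_WEST_1)
        .build();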
It seems that the table you are trying to connect to doesn't exist. Verify the table name in your code against the name of the table you created.
Please note that the table name is case sensitive.
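If you'd rather verify from code than from the console, here's a minimal sketch (assuming AWS SDK for Java v1; the table name "Event" and the region are placeholders to adjust):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withRegion(Regions.US_EAST_1) // the region you expect the table to be in
        .build();
try {
    // describeTable throws ResourceNotFoundException if the name (including case) doesn't match
    String status = client.describeTable("Event").getTable().getTableStatus();
    System.out.println("Table status: " + status);
} catch (ResourceNotFoundException e) {
    System.out.println("No table with that exact name in this region");
}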
You need to check a few things:
Check your credentials in your code:
private static String awsSecretKey = "your_secret_key"; //get it in AWS web UI
private static String awsAccessKey = "your_access_key"; //get it in AWS web UI
Check your Region code and set the correct value:
client.setRegion(Region.getRegion(Regions.US_EAST_1));
You can get this value from your AWS Web Console. Details
Check whether you have already created the DynamoDB table and indexes under your Region.
If not, check your code:
@DynamoDBTable(tableName = "Event")
public class Event implements Serializable {
public static final String CITY_INDEX = "City-Index";
public static final String AWAY_TEAM_INDEX = "AwayTeam-Index";
Then create your table (Event in my case) and indexes (City-Index, AwayTeam-Index in my case) manually from the AWS Console or by some other means. Please note: table and index names are case sensitive.
Good sample - https://github.com/aws-samples/lambda-java8-dynamodb
From the docs, it's either that you don't have a table with that name or it is in CREATING status.
I would double-check that the table does in fact exist in the correct region, and that you're using an access key that can reach it.
Or you might have selected the wrong region.
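To rule out a region mix-up programmatically, a quick sketch that prints the tables visible in each candidate region (AWS SDK for Java v1 assumed; adjust the region list to the ones you might have used):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Print the table names visible in each region you might have used.
for (Regions r : new Regions[] { Regions.US_EAST_1, Regions.EU_WEST_1 }) {
    System.out.println(r + ": " + AmazonDynamoDBClientBuilder.standard()
            .withRegion(r)
            .build()
            .listTables()
            .getTableNames());
}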
Along with @Yuliia Ashomok's answer:
AWS C++ SDK 1.7.25
Aws::Client::ClientConfiguration clientConfig;
clientConfig.region = Aws::Region::US_WEST_2;
If using Spring boot, you can configure the region via application properties:
in src/main/resources/application.yaml
cloud:
  aws:
    region:
      static: eu-west-1
As this problem is somewhat platform-agnostic: for anyone coming at the same problem from .NET/C# ...
You can instantiate your client with the Endpoint in the constructor:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials, Amazon.RegionEndpoint.USEast1);
I would have assumed that this would be picked up from your AWS profile, but it seems not, although you could do something like this, where profile is read from your SharedCredentialsFile:
new AmazonDynamoDBClient(credentials, profile.Region );
If you are sure that you have already created the table in DynamoDB but are still getting this error, chances are your region is not correct. Just look at the top-right corner of the AWS console, next to your profile dropdown: another dropdown lets you select your region. Select the right region and follow the process again.
Hope this works; it worked for me.
I have the following DynamoDB table-creation code. I am running this with Eclipse. I have configured a Tomcat server, deployed my app on Tomcat, and opened the localhost URL.
DynamoDB dynamoDB = new DynamoDB(dynamo);

ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition()
        .withAttributeName("Id").withAttributeType("N"));

ArrayList<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(new KeySchemaElement().withAttributeName("Id")
        .withKeyType(KeyType.HASH));

CreateTableRequest request1 = new CreateTableRequest()
        .withTableName("abcdef")
        .withKeySchema(keySchema)
        .withAttributeDefinitions(attributeDefinitions)
        .withProvisionedThroughput(new ProvisionedThroughput()
                .withReadCapacityUnits(5L)
                .withWriteCapacityUnits(6L));

System.out.println("Issuing CreateTable request for abcdef");
Table table = dynamoDB.createTable(request1);

System.out.println("Waiting for abcdef to be created...this may take a while...");
table.waitForActive();
It runs successfully and reports that the table was created.
But when I open the Amazon DynamoDB console, it does not show the newly created table. Can anyone suggest what goes wrong here? I have properly configured the secretKey and accessKey.
Could it be that the tables are created in a different region than the one the console shows?
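One way to confirm this from code: pin the client to an explicit region and list the tables it can see, then make sure the console's region selector matches. A sketch (assuming AWS SDK for Java v1 and us-east-1 as the intended region):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.standard()
        .withRegion(Regions.US_EAST_1) // must match the region selected in the console
        .build();
// "abcdef" should appear here if it was created in this region.
System.out.println(dynamo.listTables().getTableNames());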
I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQLCE to the clients. Clients will only sync with the server, no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
Schema created through client provisioning to SQLCE does not set up default values for uniqueidentifier types.
FK constraints are not created on client
Here is the code that is being used to create the client schema (pulled from an example I found somewhere online)
static void Provision()
{
SqlConnection serverConn = new SqlConnection(
"Data Source=xxxxx, xxxx; Database=xxxxxx; " +
"Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");
// create a connection to the SyncCompactDB database
SqlCeConnection clientConn = new SqlCeConnection(
#"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");
// get the description of the scope from the SyncDB server database
DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
ScopeNames.Main, serverConn);
// create CE provisioning object based on the scope
SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);
// starts the provisioning process
clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider that has a CreatingSchema event. This code example actually shows setting the RowGuid property on a column which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining what either of these is.
I'm further confused about whether I should make schema changes at provision time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only; it gets fired the first time a sync is initiated and the framework detects that the client database has not been provisioned (it creates the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
While researching solutions without using SyncAgent, I found that the following would also work (in addition to my commented solution above):
1. Provision the client and let the framework create the client [user] schema. Now you have your tables.
2. Deprovision: this removes the restrictions on editing the tables/columns.
3. Make your changes (in my case, setting up Is RowGuid on PK columns and adding FK constraints). This actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on existing columns.
4. Provision again using DbSyncCreationOption.CreateOrUseExisting.