I have a large schema with ~70 tables, many of them connected to each other (194 @connection directives), like this:
type table1 @model {
  id: ID!
  name: String!
  ...
  table2: table2 @connection
}
type table2 @model {
  id: ID!
  ...
}
This works fine. Now my amount of data is steadily growing and I need to be able to query for results and sort them.
I've read several articles and found one advising me to add a @key directive to generate a GSI with two fields, so I can say "filter the results according to my filter property, sort them by the field 'name', and return the first 10 entries, with the rest accessible via the nextToken parameter".
So I tried to add a GSI like this:
type table1 @model
  @key(name: "byName", fields: ["id", "name"], queryField: "idByName") {
  id: ID!
  name: String!
  ...
  table2: table2 @connection
}
Running
amplify push --minify
I receive the error:
Attempting to add a local secondary index to the table1Table table in the table1 stack. Local secondary indexes must be created when the table is created.
An error occured during the push operation: Attempting to add a local secondary index to the table1Table table in the table1 stack.
Local secondary indexes must be created when the table is created.
Why does it create an LSI instead of a GSI? Is there any way to add @key directives to the tables after they have been created and filled? There are so many datasets from different tables linked with each other that just setting up a new schema would take ages.
The billing mode is PAY_PER_REQUEST, if that has any impact.
Any ideas how to proceed?
Thanks in advance!
Regards Christian
If you are using a new environment, delete the #current-cloud-backend folder first.
Then amplify init recreates the folder, but alas, with only one file in it: amplify-meta.json.
I have a DynamoDB table like this:
I want to list all posts irrespective of users, i.e. by getting all items whose sort key is "post". How can I achieve this?
I have heard about Global Secondary Indexes, but couldn't figure out how to use them.
You create a global secondary index with a Key Schema like this:
Partition Key: SK attribute of the base table
Sort Key: PK attribute of the base table
It's called an inverted index. Then you can Query the Global Secondary Index by specifying the IndexName in the Query and search for all items that have "post" as the value for SK.
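As a rough sketch of that query (assuming boto3, a table named MyTable, and an inverted index named GSI1 whose partition key is the base table's SK attribute):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")  # assumed table name

# Query the inverted index: its partition key is the base table's SK,
# so asking for SK = "post" returns every post regardless of user.
response = table.query(
    IndexName="GSI1",  # assumed index name
    KeyConditionExpression=Key("SK").eq("post"),
)
for item in response["Items"]:
    print(item)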
I have tried several ways to rename some column names in an Athena table,
after reading the following article:
https://docs.aws.amazon.com/athena/latest/ug/alter-table-replace-columns.html
but I have had no luck with it.
I tried
ALTER TABLE "users_data"."values_portions" REPLACE COLUMNS ('username/teradata' 'String', 'username_teradata' 'String')
and got this error:
no viable alternative at input 'alter table "users_data"."values_portions" replace' (service: amazonathena; status code: 400; error code: invalidrequestexception; request id: 23232ssdds.....; proxy: null)
You can refer to this document, which talks about renaming columns. The query that you are trying to run would replace all the columns in the existing table with the provided column list.
One strategy for renaming columns is to create a new table based on the same underlying data, but using new column names. The example mentioned in the link creates a new orders_parquet table called orders_parquet_column_renamed. The example changes the column o_totalprice name to o_total_price and then runs a query in Athena.
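As a rough sketch of that strategy run programmatically (assuming boto3, an Athena database called my_database, an S3 result bucket of your own, and only a couple of the example's columns shown):

import boto3

# Hypothetical result location; replace with your own bucket.
OUTPUT_LOCATION = "s3://my-athena-results/renamed/"

# CTAS statement that copies orders_parquet into a new table,
# renaming o_totalprice to o_total_price along the way.
RENAME_QUERY = """
CREATE TABLE orders_parquet_column_renamed
WITH (format = 'PARQUET') AS
SELECT o_orderkey,
       o_custkey,
       o_totalprice AS o_total_price
FROM orders_parquet
"""

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString=RENAME_QUERY,
    QueryExecutionContext={"Database": "my_database"},  # assumed database name
    ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
)
print(response["QueryExecutionId"])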
Another way of changing the column name is by simply going to AWS Glue -> Select database -> select table -> edit schema -> double click on column name -> type in new name -> save.
I have a large target table with columns (id, value). I want to update value='old' to value='new'.
The simplest way would be to UPDATE target SET value='new' WHERE value='old';
However, this deletes rows and creates new ones, which is possibly not recommended. So I tried to do a merge-style column update:
-- staging
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage SELECT id, value FROM target WHERE value = 'old';
UPDATE stage SET value = 'new' WHERE value = 'old'; -- ??? how do you update value?
-- merge
BEGIN TRANSACTION;
UPDATE target
SET value = stage.value FROM stage
WHERE target.id = stage.id AND target.distkey = stage.distkey; -- collocated join?
END TRANSACTION;
DROP TABLE stage;
This can't be the best way of creating the table stage: I have to do all these UPDATE delete/writes when I update this way. Is there a way to do it in the INSERT?
Is it necessary to force the collocated join when I use CREATE TABLE LIKE?
Are you updating all the rows in the table?
If yes, you can use CTAS (CREATE TABLE AS), which is the recommended method.
Assuming your table looks like this:
table1
id, col1, col2, value
You can use the following SQL to create a new table:
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1;
After you verify the data in tmp_table:
DROP TABLE table1;
ALTER TABLE tmp_table RENAME TO table1;
If you are not updating all the rows, you can use a filter to do a CTAS and insert the rest of the rows into the new table; let me know if you need more info if this is the case.
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1
WHERE value = 'old';
INSERT INTO tmp_table SELECT * FROM table1 WHERE value <> 'old';
The next step would be to DROP table1 and rename tmp_table to table1, as above.
Update: based on your comment you can do the following; let me know if this solves your case.
This method basically creates a new table to replace your existing table.
I have used some of your code
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage SELECT id, 'new' FROM target WHERE value = 'old';
The above INSERT inserts the rows to be updated, already carrying 'new'; there is no need to run an UPDATE after this.
Bring over the unchanged rows:
INSERT INTO stage SELECT id, value FROM target WHERE value <> 'old';
After this point your target table, the original, is still intact.
The stage table will have both sets of rows: the updated rows with the 'new' value and the rows you did not want to change.
To replace your target with stage
DROP TABLE target;
or, to keep it for further verification:
ALTER TABLE target RENAME TO target_old;
ALTER TABLE stage RENAME TO target;
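Putting those steps together, a minimal sketch of the staged swap run from Python with psycopg2 (the connection parameters are placeholders; target and stage are the table names used above):

import psycopg2

# Assumed connection details; replace with your cluster endpoint and credentials.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="mydb",
    user="myuser",
    password="mypassword",
)

statements = [
    "CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);",
    # Rows to be updated, already carrying the new value.
    "INSERT INTO stage SELECT id, 'new' FROM target WHERE value = 'old';",
    # Rows that stay the same.
    "INSERT INTO stage SELECT id, value FROM target WHERE value <> 'old';",
    # Swap: keep the original around for verification.
    "ALTER TABLE target RENAME TO target_old;",
    "ALTER TABLE stage RENAME TO target;",
]

with conn:                      # commits on success, rolls back on error
    with conn.cursor() as cur:
        for sql in statements:
            cur.execute(sql)

conn.close()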
From a Redshift developer:
This case doesn't require an upsert, or update+insert, and it is fine to just run the update:
UPDATE target SET value='new' WHERE value='old';
Another way would be to INSERT the rows you need and DELETE the other rows, but that's unnecessarily complicated.
I am using FileMaker Pro 13. I need to add a portal to show data from a related table.
Some of the fields I want to add to the portal are foreign keys from that related table, which are linked to a third table.
The results I am getting in the portal from those fields are numbers, but I need the data (text) from the third table that is related to each foreign key.
Is it possible to create a portal that shows the text related to that foreign key, and if so, how can I achieve that?
Thanks a lot!
Example schema is on link https://www.lucidchart.com/invitations/accept/e45dfdfd-185d-46c8-ad9e-e8e8dc270ee7
In general terms, I want to add a portal on a layout based on Table 3, so I can view related data from Table 2; it would have the fields tbl2item and TBL1_foreign key (from which I pull data from Table 1 when I enter data in Table 2, using a pop-up menu). And in the portal data I need TBL1_foreign key to be represented as text from the related table instead of auto-numbers.
Assuming the relationship:
Table 1 --> Table 2 --> Table 3
You are in the layout based on Table 1. Make sure there is a relationship to Table 3 and it is in the same table occurrence group (TOG) in Manage Database. Just place the field with the text from Table 3 on your portal row and it should work.
Make sure you select the correct relationship for Table 3. In the drop-down list it should be in the "Related" group. It should look like "Table 3::myField" on the layout, with the name matching the name of your table occurrence (TO) from Manage Database, which could be different from your base table name.
The other option would be to use a value list.
I do not see your file, so if it does not work for you, post more details about your setup.
I'm using the following: dynamodb2, boto, python. I have the following code for creating a table:
from boto.dynamodb2.fields import HashKey, RangeKey, GlobalAllIndex
from boto.dynamodb2.table import Table
from boto.dynamodb2.types import NUMBER, STRING

table = Table.create('mySecondTable',
    schema=[
        HashKey('ID'),
        RangeKey('advertiser'),
    ],
    throughput={'read': 5, 'write': 2},
    global_indexes=[
        GlobalAllIndex('otherDataIndex', parts=[
            HashKey('date', data_type=NUMBER),
            RangeKey('publisher', data_type=STRING),
        ], throughput={'read': 5, 'write': 3}),
    ],
    connection=conn)
I would like to be able to query by the following attributes:
ID, advertiser, date, publisher, size, and color
That means I need a different schema. When I add additional attributes, I cannot query by them unless the attribute name is listed in the schema.
The problem, however, is that right now I am only able to query by ID, advertiser, date, and publisher. How can I add additional attributes that I can query by?
I read this which appears to say that it is possible:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
However there is no example here:
http://boto.readthedocs.org/en/latest/dynamodb2_tut.html
I tried adding an additional range key; however, it doesn't work (you cannot have duplicates).
I'd like it to be like:
table = Table.create('mySecondTable',
schema=[
RangeKey('advertiser'),
otherKey('date')
fourthKey('publisher') ... etc
throughput={'read':5,'write':2},
connection=conn)
Thanks!
If you want to add additional range keys, you need to use a local secondary index (LSI).
You can query the LSI in the same way that you query the base table: you provide an exact value for the hash key and a comparison predicate for the range key.
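For illustration, a minimal sketch using the same boto dynamodb2 API: since LSIs can only be defined when the table is created, this recreates the table with an index on date and then queries it (the region, index name, and attribute values are placeholders, and the global index from your snippet is omitted for brevity):

import boto.dynamodb2
from boto.dynamodb2.fields import HashKey, RangeKey, AllIndex
from boto.dynamodb2.table import Table
from boto.dynamodb2.types import NUMBER

conn = boto.dynamodb2.connect_to_region('us-east-1')  # assumed region

# Local secondary index: same hash key (ID), alternative range key (date).
table = Table.create('mySecondTable',
    schema=[
        HashKey('ID'),
        RangeKey('advertiser'),
    ],
    indexes=[
        AllIndex('DateIndex', parts=[
            HashKey('ID'),
            RangeKey('date', data_type=NUMBER),
        ]),
    ],
    throughput={'read': 5, 'write': 2},
    connection=conn)

# Query the index: exact hash key plus a range condition on the indexed attribute.
results = table.query_2(
    index='DateIndex',
    ID__eq='some-id',      # hypothetical hash key value
    date__gte=20150101,    # hypothetical range condition
)
for item in results:
    print(item['publisher'])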