I have tried several ways to rename a column in an Athena table, after reading the following article:
https://docs.aws.amazon.com/athena/latest/ug/alter-table-replace-columns.html
But I have had no luck with it.
I tried
ALTER TABLE "users_data"."values_portions" REPLACE COLUMNS ('username/teradata' 'String', 'username_teradata' 'String')
I got this error:
no viable alternative at input 'alter table "users_data"."values_portions" replace' (service: amazonathena; status code: 400; error code: invalidrequestexception; request id: 23232ssdds.....; proxy: null)
You can refer to this document, which talks about renaming columns. The query that you are trying to run will replace all the columns in the existing table with the provided column list.
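In other words, to rename one column with REPLACE COLUMNS you have to list every column the table should end up with. A minimal sketch, assuming `username/teradata` is the only column being renamed and the table's SerDe supports REPLACE COLUMNS (note the backticks: Athena DDL uses backticks rather than double quotes for identifiers, which is likely what triggered the "no viable alternative" parse error):
ALTER TABLE users_data.values_portions REPLACE COLUMNS (
  `username_teradata` string
  -- ...followed by every other existing column, each with its type
)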
One strategy for renaming columns is to create a new table based on the same underlying data, but using new column names. The example mentioned in the link creates a new orders_parquet table called orders_parquet_column_renamed. The example changes the column o_totalprice name to o_total_price and then runs a query in Athena.
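Applied to the table in the question, that strategy would look roughly like this (a sketch only; the new table name and the PARQUET format are assumptions):
CREATE TABLE users_data.values_portions_renamed
WITH (format = 'PARQUET') AS
SELECT
  "username/teradata" AS username_teradata
  -- select the remaining columns unchanged
FROM users_data.values_portions;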
Another way of changing the column name is to go to AWS Glue -> select the database -> select the table -> edit schema -> double-click on the column name -> type in the new name -> save.
I am trying to apply a WHERE clause on a DIMENSION of my AWS Timestream records. However, I get the error: Column does not exist.
Here is my table schema (screenshots of the table schema and the table measures omitted).
First, I will show all the sample data I put in the table:
SELECT username, time, manual_usage
FROM "meter-reading"."meter-metrics"
ORDER BY time DESC
LIMIT 4
The result comes back as expected (result screenshot omitted).
What I wanted to do was to query and filter the records by the Dimension ("username" specifically).
SELECT *
FROM "meter-reading"."meter-metrics"
WHERE username = "OnceADay"
ORDER BY time DESC LIMIT 10
Then I got the Error: Column 'OnceADay' does not exist
I searched for any quotas on Dimension names and checked my schema for errors:
https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.naming
https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.system_identifier
But I didn't find that my "username" dimension violates any of the above rules.
I also checked other example queries from an AWS blog post, where the author uses the WHERE clause on a Dimension as normal:
https://aws.amazon.com/blogs/database/effective-queries-for-common-query-patterns-in-amazon-timestream/
I figured it out after trying the sample code. Turns out it was a silly mistake, I believe.
Using single quotes (') instead of double quotation marks (") solved my problem: in Timestream's SQL dialect, double-quoted text is treated as an identifier (a column name), while single quotes denote string literals, so "OnceADay" was being parsed as a column name.
SELECT *
FROM "meter-reading"."meter-metrics"
WHERE username = 'OnceADay'
ORDER BY time DESC LIMIT 10
I am on Application Express 21.1.0.
I added a column to a db table, and tried to add that column to the Form region based on that table.
I got this error:
ORA-20999: Failed to parse SQL query! ORA-06550: line 4, column 15: ORA-00904: "NEEDED_EXAMS": invalid identifier
And I cannot find the column in any "Source > Column" attribute of any page item of that form.
I can query the new column in SQL Commands.
The new column's name is "NEEDED_EXAMS". It's a VARCHAR2(500).
Don't do it manually; use the built-in feature: right-click the region and select "Synchronize columns" from the menu, and it'll do everything for you. It works for reports and forms.
Solved.
I have many parsing schemas, and I was creating tables through Object Browser in a different schema than my app's parsing schema.
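For anyone hitting the same thing, a quick way to check which schema actually owns a table is to query the ALL_TABLES view (a minimal sketch; replace MY_TABLE with your table name):
SELECT owner, table_name
FROM all_tables
WHERE table_name = 'MY_TABLE';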
Suppose we have the following table in BigQuery:
CREATE TABLE sample_dataset.sample_table (
  id INT
  ,struct_geo STRUCT<
    country STRING
    ,state STRING
    ,city STRING
  >
  ,array_info ARRAY<
    STRUCT<
      key STRING
      ,value STRING
    >
  >
);
I want to rename the columns inside the STRUCT and the ARRAY using an ALTER TABLE command. It's possible to follow the Google documentation available here for normal ("non-nested") columns:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN id TO str_id
But when I try to run the same command for nested columns, I get errors from BigQuery.
Running the command for a column inside a STRUCT gives me the following message:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN `struct_geo.country` TO `struct_geo.str_country`
Error: ALTER TABLE RENAME COLUMN not found: struct_geo.country.
The exact same message appears when I run the same statement, but targeting a column inside an ARRAY:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN `array_info.str_key` TO `array_info.str_key`
Error: ALTER TABLE RENAME COLUMN not found: array_info.str_key
I got stuck since the BigQuery documentation about nested columns (available here) lacks examples of ALTER TABLE statements and refers directly to the default documentation for non-nested columns.
I understand that I can rename the columns by simply creating a new table using CREATE TABLE new_table AS SELECT ... and passing the new column names as aliases, but this would run a query over the whole table, which I'd rather avoid since my original table weighs well over 10 TB...
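For reference, the workaround I mean would look roughly like this (a sketch only; the new table and column names are illustrative):
CREATE TABLE sample_dataset.sample_table_renamed AS
SELECT
  id AS str_id
  ,STRUCT(
    struct_geo.country AS str_country
    ,struct_geo.state AS str_state
    ,struct_geo.city AS str_city
  ) AS struct_geo
  ,ARRAY(
    SELECT AS STRUCT info.key AS str_key, info.value AS str_value
    FROM UNNEST(array_info) AS info
  ) AS array_info
FROM sample_dataset.sample_table;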
Thanks in advance for any tips or solutions!
I have a large schema with ~70 tables, many of them connected to each other (194 @connection directives) like this:
type table1 @model {
  id: ID!
  name: String!
  ...
  table2: table2 @connection
}
type table2 @model {
  id: ID!
  ...
}
This works fine. Now my data volume is steadily growing and I need to be able to query for results and sort them.
I've read several articles and found one advising me to create a @key directive to generate a GSI with two fields, so I can say "filter the results according to my filter property, sort them by the field "name", and return the first 10 entries, with the rest accessible via the nextToken parameter".
So I tried to add a GSI like this:
type table1 @model
@key(name: "byName", fields: ["id", "name"], queryField: "idByName") {
  id: ID!
  name: String!
  ...
  table2: table2 @connection
}
Running
amplify push --minify
I receive the error:
Attempting to add a local secondary index to the table1Table table in the table1 stack. Local secondary indexes must be created when the table is created.
An error occured during the push operation: Attempting to add a local secondary index to the table1Table table in the table1 stack.
Local secondary indexes must be created when the table is created.
Why does it create an LSI instead of a GSI? Is there any way to add @key directives to tables after they have been created and filled? There are so many datasets from different tables linked with each other that just setting up a new schema would take ages.
The billing mode is PAY_PER_REQUEST, if that has any impact.
Any ideas how to proceed?
Thanks in advance!
Regards, Christian
If you are using a new environment, delete the #current-cloud-backend folder first.
Then amplify init creates the folder again, but alas, with only one file in it: amplify-meta.json.
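As for why an LSI is generated (this is my understanding of the Amplify @key transformer, so treat it as an assumption): when the first entry in fields matches the table's primary partition key, as "id" does here, Amplify creates a local secondary index, and LSIs can only be added at table creation; making a different field the index's partition key should produce a GSI instead. A hedged sketch:
type table1 @model
@key(name: "byName", fields: ["name", "id"], queryField: "idByName") {
  # "name" is now the index partition key, so a GSI should be generated
  id: ID!
  name: String!
}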
I need to find my schema name because I want to delete triggers which I created.
For example the following:
CREATE OR REPLACE TRIGGER TRIGGER_ORDER
BEFORE INSERT ON HOUSE_ORDER
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
WHEN (NEW.ORDER_ID IS NULL)
BEGIN
SELECT SEQ_ORDER_ID.NEXTVAL
INTO :NEW.ORDER_ID FROM DUAL;
END;
/
When I now try to drop the trigger:
DROP TRIGGER TRIGGER_ORDER
I get the following error:
ORA-04080: trigger 'TRIGGER_ORDER' does not exist
I found out that I need to call something like
DROP TRIGGER SCHEMA_NAME.TRIGGER_ORDER
but I have no idea what my schema name is. So how can I find it?
You should use the ALL_TRIGGERS view. Its OWNER column indicates the schema the trigger belongs to (there is also a TABLE_OWNER column for the schema of the table the trigger is defined on).
select * from all_triggers
where table_name = 'YOUR_TABLE'
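Or, filtering directly on the trigger name from the question (a minimal sketch; if you created the trigger yourself, the owner is usually just your login user, which you can check with SELECT USER FROM dual):
SELECT owner, trigger_name, table_owner, table_name
FROM all_triggers
WHERE trigger_name = 'TRIGGER_ORDER';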