What is CLI execute error in SAS?

I am trying to use SAS to upload a table into Teradata. The table started to upload: the variable names were written and the table was created. However, I got:
ERROR: CLI execute error: [Teradata][ODBC Teradata Driver] Character string truncated
What is this?

CLI execute error is an often-unhelpful error message that SAS returns when it receives an error back from the RDBMS after submitting code. In this case it may not be a true error; it may simply mean that one or more of your character strings did not fit in the target columns and were truncated.

Related

AWS Glue LAUNCH ERROR | java.net.URISyntaxException: Illegal character in scheme name at index 0: s3://py-function-bucket/aws_custom_functions.zip

I've started to develop an ETL job in AWS Glue, using a notebook to verify the results of each step. When running one cell at a time, the job runs correctly. However, when using the run option for Glue, the following error appears:
LAUNCH ERROR | java.net.URISyntaxException: Illegal character in scheme name at index 0:
s3://py-function-bucket/aws_custom_functions.zip
The magics I'm using to configure the notebook are as follows:
%extra_py_files s3://py-function-bucket/aws_custom_functions.zip
%number_of_workers 2
%%configure
{
"region": "region",
"iam_role": "arn:aws:iam::glue_role"
}
In other words, I have a bucket with a zip file that contains additional functions that need to be imported into the notebook.
I have tried using the full URL instead of the URI:
%extra_py_files https://py-function-bucket.s3.region.amazonaws.com/aws_custom_functions.zip
This shows the same error as before, but with the full URL.
Finally, I tried running the cell with the magics and then deleting it. This only means that the argument for importing the additional functions is never passed, which in turn causes errors because the functions are not defined.
Is anyone else having this issue? A workaround would be declaring the functions in the notebook itself, but that would mean doing the same for every job I plan to develop.

Getting an error while using Google Data Transfer to transfer multiple JSON files from GCS to BigQuery

I'm getting an error while trying to transfer files from Google Cloud Storage to Google BigQuery.
This is the error:
failed with error INVALID_ARGUMENT : Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more
details.
When I look into the errors, I see this:
Error while reading data, error message: JSON parsing error in row starting at position 0:
No such field: _ttl.
I don't understand where the problem is.
If I transfer just one file, the data is sent to my BigQuery table, but I need to transfer all the files every day.
This is an example of my data format:
{"createdAt":"2021-12-07T12:07:44.547Z","_lastChangedAt":1638878864561,"isMain":true,"__typename":"Accounting","name":"main","belongTo":"siewecarine","id":"d00ae4ad-c661-40b8-9e90-e0f53b2211fb","_version":1,"updatedAt":"2021-12-07T12:07:44.547Z"}
{"createdAt":"2021-12-07T12:09:12.583Z","_lastChangedAt":1638878952618,"isMain":false,"__typename":"Accounting","name":"test1","belongTo":"mbappe","id":"ee42db80-a6f4-400c-a089-061cc7eec967","_version":1,"updatedAt":"2021-12-07T12:09:12.583Z"}
My data comes from an S3 bucket, and I batch it there from DynamoDB.
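For reference, one way to test a single file outside of the transfer service is a direct bq load with unknown fields ignored; this is only a sketch, and the my_dataset.accounting table and the gs:// path below are placeholders rather than names from the question:
# --ignore_unknown_values makes BigQuery skip JSON keys that have no matching
# column in the table schema (such as the _ttl field the transfer complains about).
# The dataset, table, and bucket names are placeholders.
$ bq load \
    --source_format=NEWLINE_DELIMITED_JSON \
    --ignore_unknown_values \
    my_dataset.accounting \
    "gs://my-bucket/exports/*.json"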

Error BigQuery/Dataflow "Could not resolve table in Data Catalog"

I'm having trouble with a job I've set up on Dataflow.
Here is the context: I created a dataset on BigQuery using the following path
bi-training-gcp:sales.sales_data
In the properties I can see that the data location is "US"
Now I want to run a job on Dataflow, so I enter the following command into the Google Cloud Shell:
gcloud dataflow sql query ' SELECT country, DATE_TRUNC(ORDERDATE , MONTH),
sum(sales) FROM bi-training-gcp.sales.sales_data group by 1,2 ' --job-name=dataflow-sql-sales-monthly --region=us-east1 --bigquery-dataset=sales --bigquery-table=monthly_sales
The query is accepted by the console and returns a sort of acceptance message.
After that I go to the Dataflow dashboard. I can see a new job queued, but after 5 minutes or so the job fails and I get the following error messages:
Error 2021-09-29T18:06:00.795Z Invalid/unsupported arguments for SQL job launch: Invalid table specification in Data Catalog: Could not resolve table in Data Catalog: bi-training-gcp.sales.sales_data
Error 2021-09-29T18:10:31.592036462Z Error occurred in the launcher container: Template launch failed. See console logs.
My guess is that it cannot find my table, perhaps because I specified the wrong location/region. Since my table's data location is "US", I thought it would be on a US server (which is why I specified us-east1 as the region), but I tried all US regions with no success...
Does anybody know how I can solve this?
Thank you
This error occurs if the Dataflow service account doesn't have access to the Data Catalog API. To resolve this issue, enable the Data Catalog API in the Google Cloud project that you're using to write and run queries. Alternatively, assign the roles/datacatalog.viewer role to the Dataflow service account.
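A minimal sketch of those two fixes with the gcloud CLI, reusing the bi-training-gcp project from the question; SERVICE_ACCOUNT_EMAIL is a placeholder for the actual Dataflow service account address:
# Enable the Data Catalog API in the project used to write and run the queries.
$ gcloud services enable datacatalog.googleapis.com --project=bi-training-gcp
# Or grant the Data Catalog viewer role to the Dataflow service account
# (SERVICE_ACCOUNT_EMAIL is a placeholder).
$ gcloud projects add-iam-policy-binding bi-training-gcp \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/datacatalog.viewer"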

Google BigQuery cannot read some ORC data

I'm trying to load ORC data files stored in GCS into BigQuery via bq load/bq mk and I'm facing the error below. The data files were copied via the hadoop distcp command from an on-prem cluster's Hive instance, version 1.2. Most of the ORC files load successfully, but a few do not. There is no problem when I read this data from Hive.
Command I used:
$ bq load --source_format ORC hadoop_migration.pm hive/part-v006-o000-r-00000_a_17
Upload complete.
Waiting on bqjob_r7233761202886bd8_00000175f4b18a74_1 ... (1s) Current status: DONE
BigQuery error in load operation: Error processing job '<project>-af9bd5f6:bqjob_r7233761202886bd8_00000175f4b18a74_1': Error while reading data, error message:
The Apache Orc library failed to parse metadata of stripes with error: failed to open /usr/share/zoneinfo/GMT-00:00 - No such file or directory
Indeed, there is no such file, and I believe there shouldn't be.
Googling doesn't turn up this error message, but I've found a similar problem here: https://issues.apache.org/jira/browse/ARROW-4966. There is a workaround for on-prem servers of creating a symlink to /usr/share/zoneinfo/GMT-00:00, but I'm in the cloud.
Additionally, I found that if I extract the data from an ORC file into JSON format via orc-tools, I'm able to load that JSON file into BigQuery. So I suspect the problem is not in the data itself.
Has anybody come across such a problem?
The official Google support position is below. In short, BigQuery doesn't understand some time zone descriptions, and the suggestion was to change them in the data. Our workaround was to convert the ORC data to Parquet and then load that into the table.
Indeed this error can happen; it also appears when you try to execute a query from the BigQuery Cloud Console such as:
select timestamp('2020-01-01 00:00:00 GMT-00:00')
you’ll get the same error. It is not just related to the ORC import; it’s about how BigQuery understands timestamps. BigQuery supports a wide range of representations, as described in [1]. So:
“2020-01-01 00:00:00 GMT-00:00” -- incorrect timestamp string literal
“2020-01-01 00:00:00 abcdef” -- incorrect timestamp string literal
“2020-01-01 00:00:00-00:00” -- correct timestamp string literal
In your case the problem is with the representation of the time zone within the ORC file; I suppose it was generated that way by some external system. If you were able to get the “GMT-00:00” string (including its preceding space) replaced with just “-00:00”, that would be a correct time zone specification. Can you change the configuration of the system that generated the file so that it produces a proper time zone string?
Creating a symlink only masks the problem rather than solving it properly, and in the case of BigQuery it is not possible anyway.
Best regards,
Google Cloud Support
[1] https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#time_zones
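For what it's worth, the orc-tools detour described in the question can be scripted; this is only a sketch under a few assumptions: the orc-tools jar version is a placeholder, the tool's non-JSON status lines may need to be stripped before loading, and the destination is the hadoop_migration.pm table from the original command:
# Confirm that the corrected literal parses (the " GMT-00:00" form does not):
$ bq query --use_legacy_sql=false "SELECT TIMESTAMP('2020-01-01 00:00:00-00:00')"
# Dump the problematic ORC file to JSON rows with the ORC Java tools
# (jar version is a placeholder; strip any non-JSON status lines from the output).
$ java -jar orc-tools-1.7.0-uber.jar data part-v006-o000-r-00000_a_17 > part-v006.json
# Load the JSON instead of the ORC file, which the question reports works.
$ bq load --source_format=NEWLINE_DELIMITED_JSON hadoop_migration.pm part-v006.json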

ColdFusion MX7 error with file upload

I am using a ColdFusion MX7 server. While uploading an Excel file in (97-2003) format, I am getting the following error:
Unable to construct record instance, the following exception occured: null
I am getting this error when I enter some data, save the file on my desktop in (97-2003) format, and then use a CFX tag to dump the uploaded data. But if I upload the template alone, without entering any data, it shows/dumps the column names in the template.
Is there any way to upload the same Excel file with MX7?
Thanks in advance
Are you protecting the sheet/workbook after you enter data? Try without it. Also, what OS is CFMX 7 running on?