I've imported a CSV file into AWS DataBrew. By default, it converted every date-time column to string. I need to check whether a field is in date-time format or not. When I try to convert the "Source" column to "timestamp" format, it returns null for fields whose hour is a single digit. Can anyone please tell me how to resolve this issue?
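One workaround, if you can preprocess the file before importing it into DataBrew, is to zero-pad the timestamps first. A minimal sketch, assuming pandas and hypothetical file/column names (pandas does the lenient parsing here, not DataBrew):

import pandas as pd

# Hypothetical file and column names for illustration.
df = pd.read_csv("source.csv")

# pandas tolerates single-digit hours when parsing, so parse
# first, then write back zero-padded timestamps in one fixed
# format that DataBrew can convert without producing null.
df["Source"] = pd.to_datetime(df["Source"]).dt.strftime("%Y-%m-%d %H:%M:%S")

df.to_csv("source_normalized.csv", index=False)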
I'm trying to create a file output with a date extension, yyy-mm-dd etc. Cloud Workflows does not have the date format functions of BigQuery.
The closest I could get is ${time.format(sys.now())}, but that provides a full timestamp.
Is there any way to parse it into the required format?
I suppose you mean a date extension in this format yyyy-mm-dd (your question says yyy-mm-dd).
${time.format(sys.now())} is giving me a timestamp like this:
2022-10-22T18:39:05.570539Z
With the text.substring() function ${text.substring(time.format(sys.now()), 0, 10)} I'm getting this string:
2022-10-22
I am working on importing data from an S3 bucket into DynamoDB using Data Pipeline. The data is in CSV format. I have been struggling with this for a week now, and finally found the real problem.
I have some fields; the important ones are id (the partition key) and username (the sort key).
One of the entries has a username stored as a comma-separated value, for example: {"username": "someuser,name"}. The irony of a CSV file is that when mapping to DynamoDB, the comma inside the value is treated as a field separator, so the value spills into a new column. The import then fails with the error The provided key element does not match the schema, which is of course correct.
Is there any way I can overcome this issue? Thanks in advance for your suggestions.
EDIT:
The CSV entry looks like this, as an example:
1234567,"user,name",$123$,some#email.de,2002-05-28 14:07:04.0,2013-07-19 14:17:05.0,2020-02-19 15:32:18.611,2014-02-27 14:49:19.0,,,,
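One way out, assuming you can preprocess the file before the pipeline runs, is to read it with a quote-aware CSV parser and rewrite the embedded commas. A minimal sketch with hypothetical file names and a hypothetical replacement character:

import csv

# Python's csv module respects RFC 4180 quoting, so
# "user,name" is read back as a single field.
with open("input.csv", newline="") as src, \
     open("output.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Replace embedded commas so a naive comma-split
        # downstream (e.g. the Data Pipeline CSV mapper)
        # can no longer break the row apart.
        writer.writerow([field.replace(",", ";") for field in row])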
I have a Datastore entity with a column named timestamp. It was supposed to be a timestamp type, but it is a string type as of now. This column has values in two formats: YYYY-MM-DDTHH:MM:SSZ and YYYY-MM-DDTHH:MM:SS-offset_hours.
In our code we sort on timestamp, which is essentially sorting the string. The question is: how can I convert this string column into a Timestamp?
Do I have to do any conversion for the existing values that are in different formats? How can I do it in Terraform?
Google Datastore has no notion of schema migrations; you're going to have to write a task-queue job to do it.
The proper way would be to create a new column called timestamp_2 and backfill it. Here is an article GCP wrote on updating schemas:
https://cloud.google.com/appengine/articles/update_schema
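A minimal backfill sketch, assuming the Python google-cloud-datastore client, a hypothetical kind name MyKind, and offsets that include minutes (e.g. -07:00); in production you would batch the writes and run this from a task queue as the article describes:

from datetime import datetime
from google.cloud import datastore

client = datastore.Client()

# %z accepts both a literal "Z" and numeric offsets such as
# -07:00 (Python 3.7+), so one format covers both variants.
FMT = "%Y-%m-%dT%H:%M:%S%z"

for entity in client.query(kind="MyKind").fetch():
    raw = entity.get("timestamp")
    if isinstance(raw, str):
        # Write the parsed value to a new property and keep the
        # old one until every reader has been migrated.
        entity["timestamp_2"] = datetime.strptime(raw, FMT)
        client.put(entity)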
I have a CSV file downloaded from http://yann.lecun.com/exdb/mnist/index.html. I need to convert it to ARFF format.
I tried running
java weka.core.converters.CSVLoader /home/saket/Documents/Assignment/NIST7000 > /home/saket/Documents/Myfile.arff
but it's giving the following error:
java.lang.IllegalArgumentException: Attribute names are not unique! Causes: '0' '0' '0' '0' '0' '0' '0'
Then I tried the Java code from http://weka.wikispaces.com/Converting+CSV+to+ARFF, but the same error came up.
Can someone please suggest what I am doing wrong?
There were no header fields in the CSV, so I created a script that adds column0,column1,...,class as the first line of the CSV file (see the sketch below).
Then I opened the generated CSV file in Weka.
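A minimal sketch of such a script, assuming the label is the last column; the output file name is hypothetical:

# Prepend a unique header row so Weka's CSVLoader no longer
# derives duplicate attribute names from the first data row.
with open("NIST7000") as src:
    data = src.read()

n_cols = len(data.splitlines()[0].split(","))
header = ",".join(f"column{i}" for i in range(n_cols - 1)) + ",class"

with open("NIST7000_with_header.csv", "w") as dst:
    dst.write(header + "\n" + data)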
I encountered the same exception, but for a different reason. I used "class" as an attribute name, but this word also appeared in my data as a string value (after @data), and Weka did not correctly separate attributes from data.
Solved by simply renaming the "class" attribute to something else, like "s_label".
It happens when the same attribute name appears in more than one column of the sheet; every name must be unique. Just rename the duplicated columns (in my case I renamed the third column, which was a duplicate). For a large dataset this can also be done with a script, as sketched below. This worked for me.
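A minimal sketch of such a script, assuming pandas and a hypothetical file name; duplicate column names get a numeric suffix:

import pandas as pd

df = pd.read_csv("data.csv")

# Suffix repeated column names so each one is unique,
# e.g. "col", "col" becomes "col", "col_1".
seen = {}
unique = []
for name in df.columns:
    count = seen.get(name, 0)
    unique.append(name if count == 0 else f"{name}_{count}")
    seen[name] = count + 1
df.columns = unique

df.to_csv("data_unique.csv", index=False)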
I need to match on a date using the LIKE operator. The date should be in the current user's locale format, not in a fixed general format.
So I use the strftime function like this:
WHERE strftime('%d.%m.%Y %H:%M', createdDate, 'localtime') LIKE '%{searchTerm}%'
Unfortunately, this works only for the fixed format '%d.%m.%Y %H:%M', but I want to use the user's current locale format.
So I need to either:
1) Get the strftime format string from a C++ locale object
2) Make SQLite format the date using the current locale itself
I've already spent several hours on this to no avail.
I would be grateful if anyone could point me in the right direction on how to tackle this problem.
Example:
Given the current locale is German, I want to get something like "%d.%m.%Y %H:%M".
For a US locale, I want to get "%m/%d/%Y %H:%M".
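For what it's worth, POSIX locales expose exactly this format string via nl_langinfo(D_T_FMT) (reachable from C++ through <langinfo.h>). A minimal sketch of the same call through Python's locale module, assuming a Unix platform:

import locale

# Adopt the user's environment locale for time formatting
# (Unix-only: nl_langinfo is not available on Windows).
locale.setlocale(locale.LC_TIME, "")

# D_T_FMT is the locale's strftime-compatible date/time format
# string; D_FMT and T_FMT give the date-only and time-only
# variants. The result can be handed straight to SQLite.
print(locale.nl_langinfo(locale.D_T_FMT))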
Normally the local date format can be obtained with the Windows GetDateFormat API (or GetDateFormatEx on Vista and later). Your program could interrogate the API and transform the date accordingly; the result can then be recorded in SQLite.
However, one can question the validity of storing timestamps in a specific display format. That basically means a lot of code to manipulate each date, or no date manipulation at all. May I suggest, if it is possible, storing dates in a plain format (say ISO or a Unix timestamp) and working from that, outputting with whichever flavour of GetDateFormat is required?
OK, different answer.
Suppose you have your MyTable:
CREATE TABLE MyTable (
    MyPrimaryKeyHnd INTEGER PRIMARY KEY,
    ...
    CreatedDate TEXT);
(Where CreatedDate is in ISO format. You could also use a Unix timestamp; your choice.)
Then create a table of possible formats:
CREATE TABLE TimeFormat (
    TimeFormatHnd INTEGER PRIMARY KEY,
    TimeFormatString TEXT NOT NULL,
    TimeFormatDescriptor TEXT);
You allow your user to choose one of those formats and keep that choice in a separate table or INI file. TimeFormatString would be your strftime()-compatible format string (such as '%d.%m.%Y %H:%M'). You then just build your query with whatever the user's choice is, as in the sketch below.
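A minimal sketch of that query building, using Python's sqlite3 for brevity; the database file name and search term are hypothetical, and both the format string and the search term are bound as parameters:

import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database file

user_format = "%d.%m.%Y %H:%M"  # looked up from TimeFormat or the INI file
search_term = "22.10"           # whatever the user typed

# strftime's format argument can itself be a bound parameter,
# so the same query serves any format the user picked.
rows = conn.execute(
    "SELECT MyPrimaryKeyHnd FROM MyTable "
    "WHERE strftime(?, CreatedDate) LIKE ?",
    (user_format, f"%{search_term}%"),
).fetchall()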