I need to get the product URL of Commerce via Data Export; however, I have not found a way of getting it except by building it from the store domain and the product RecId.
I created a new data entity and dragged and dropped RecId from the entity I inherit from, but it still does not appear in the CSV export/mapping.
Any ideas on how to achieve this?
I solved it by adding a field to the staging table and creating a mapping with RecId.
I am trying to update a table in my Cassandra database. Right now I have a table which looks like
foo(id, email) - email text type
I want to update the table to something like
foo(id, emails) - emails list of text type
I added a new column:
ALTER TABLE foo ADD emails set<text>;
but I don't know how to migrate the values from the email column to emails.
I am not very familiar with your domain; however, you have to tell Cassandra how to build the new collection of emails. Take a look at COPY if you don't have too many records. You can:
Add a new column
Export your existing data using COPY TO
Apply your transformations to the exported data
Use COPY FROM to upload your new data.
If you have to do this update in real time while your table is growing, this solution won't work.
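The transformation step in the middle can be a tiny script. Below is a minimal sketch in Python, assuming the export has the two columns from the question (id, email) and that your cqlsh version accepts a brace-delimited collection literal in the CSV fed to COPY FROM; the sample values are made up.

```python
import csv
import io

def transform(exported_csv):
    """Rewrite rows produced by COPY TO so the scalar email value
    becomes a CQL set literal that COPY FROM can load into emails."""
    out = io.StringIO()
    writer = csv.writer(out)
    for row_id, email in csv.reader(io.StringIO(exported_csv)):
        # Wrap the old scalar in braces: a@b.c -> {'a@b.c'}
        writer.writerow([row_id, "{'%s'}" % email])
    return out.getvalue()

print(transform("1,alice@example.com\r\n2,bob@example.com\r\n"))
```

In practice you would read the COPY TO output file, run it through something like this, and feed the result to COPY FROM.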
Cassandra doesn't have a native way to do such an operation.
Maybe you can create a simple migration program in your favorite programming language and follow these steps:
Create the new column in Cassandra (with the ALTER command).
Execute the migration program.
Delete the old column in Cassandra.
The program itself will take each record from Cassandra one by one and apply the transformation.
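The steps above can be sketched in Python with the DataStax driver (assumes `pip install cassandra-driver` and a reachable cluster; the keyspace and contact points are placeholders you would fill in):

```python
def to_email_set(email):
    """Per-row transformation: wrap the old scalar value in a
    one-element set, skipping NULLs."""
    return {email} if email else set()

def migrate(contact_points, keyspace):
    """Walk the table row by row and copy email into emails.
    The driver import is deferred so this file loads without it."""
    from cassandra.cluster import Cluster
    cluster = Cluster(contact_points)
    session = cluster.connect(keyspace)
    for row in session.execute("SELECT id, email FROM foo"):
        session.execute(
            "UPDATE foo SET emails = %s WHERE id = %s",
            (to_email_set(row.email), row.id),
        )
    cluster.shutdown()
```

Only after verifying the new column is populated would you drop the old one with ALTER TABLE foo DROP email.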
I am using Amazon Machine Learning to create ML models for my applications. I have created a datasource and also an ML model corresponding to that datasource. However, in my application new data keeps getting added, so I have to update the data file in S3 which is in turn used by the datasource. The question is: how can I update the datasource corresponding to that data file without changing the datasource ID, and how can I update the ML model corresponding to that datasource without changing the ML model ID?
I know that there are methods in Boto3 to update a datasource or an ML model; however, as far as I know, they only update the names of those objects.
Any help would be appreciated.
You cannot do that. Amazon ML datasources are immutable, save for the human-readable name attribute. Instead, when you have new data, create a new datasource that points at the same data file(s) in S3, and then train a new ML model using that datasource.
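A sketch of that retraining flow with Boto3, assuming configured AWS credentials; the base ID, S3 URL, and schema are placeholders, and generating a fresh timestamped ID per run is just one convention:

```python
import datetime

def versioned_id(base):
    """Fresh, human-traceable ID for each retraining run, since
    Amazon ML object IDs are immutable once created."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
    return "%s-%s" % (base, stamp)

def retrain(s3_data_url, schema_json, base_id="my-model"):
    """Create a new datasource over the (updated) S3 file, then
    train a new model from it. The old IDs are left untouched."""
    import boto3  # deferred so versioned_id stays testable offline
    ml = boto3.client("machinelearning")
    ds_id = versioned_id(base_id + "-ds")
    ml.create_data_source_from_s3(
        DataSourceId=ds_id,
        DataSpec={"DataLocationS3": s3_data_url, "DataSchema": schema_json},
        ComputeStatistics=True,  # required for datasources used in training
    )
    model_id = versioned_id(base_id)
    ml.create_ml_model(
        MLModelId=model_id,
        MLModelType="BINARY",  # or REGRESSION / MULTICLASS
        TrainingDataSourceId=ds_id,
    )
    return ds_id, model_id
```

Your application then switches its predictions over to the newly returned model ID once training completes.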
I am doing a POC in Django. I was trying to create an admin console module for inserting, updating, and deleting records through the Django admin console via models, and it was working fine.
I have 2 questions.
1. I need to have model objects for existing tables which need to be present in a particular schema, say schema1.table1.
As of now I was doing the POC in the public schema.
So can it be done in a fixed, defined schema, and if yes, how? Any reference would be very helpful.
2. I also wanted to update a few columns in the table through the console, while the rest of the columns are filled in automatically, like the current timestamp and created date. Is this possible through the default Django admin console? If yes, kindly share a reference.
Steps for 1:
What I have done as of now is create a class in models.py with the attributes author, title, body, and timeofpost.
Then I used sqlmigrate after makemigrations to create the table, and after migrating I have been using the Django admin console to insert and update records in the table. But this is for the POC only.
Now I need to do the same, but for existing tables, so that I can insert or update records in those existing tables through the admin console.
Also, the tables are getting created in the public schema by default. But I am using Postgres, and the existing tables are present in different schemas; I want to insert, update, and delete rows in these existing tables.
I am stuck here, as I don't know how to configure a model against existing database tables, in schemas other than public, that I can interact with through the Django admin console.
Steps for 2:
I also wanted the user to give input for only a few columns. For example, in this case the time of creation should not be given as input by the user; rather, it should be taken care of automatically when the record is created or updated.
Thanks
In order for Django to "interact" with an existing database you need to create a model for it, which can be done automatically as shown here. This assumes that your "external" database isn't going to change often, because you'll have to keep your models in sync, which is tricky - there are other approaches if you need that.
As for working with multiple database schemas - is there a reason you can't put your POC table in the same database as the others? Django supports multiple databases, but it will be harder to setup. See here.
Finally, it sounds like you are interested in setting the Django default field attribute. For an example of current time see here.
I think this is a recurring question on the Internet, but unfortunately I'm still unable to find a successful answer.
I'm using Ruby on Rails 4 and I would like to create a model that interfaces with a SQL query, not with an actual table in the database. For example, let's suppose I have two tables in my database: Questions and Answers. I want to make a report that contains statistics on both tables. For that purpose, I have a complex SQL statement that takes data from these tables to build up the statistics. However, the SELECT used in the SQL statement does not take values directly from either the Answers or the Questions table, but from nested SELECTs.
So far I've been able to create the StatItem model without any migration, but when I try StatItem.find_by_sql("...nested selects...") the system complains about a nonexistent table stat_items in the database.
How can I create a model whose instances' data is retrieved from a complex query and not from a table? If that's not possible, I could create a temporary table to store the data. In that case, how can I tell the migration file not to create the table (it would be created by the query)?
How about creating a materialized view from your complex query and following this tutorial:
ActiveRecord + PostgreSQL Materialized Views
Michael Kohl's proposal of materialized views gave me an idea, which I initially discarded because I wrongly thought that a single database connection could be shared by two processes. But after reading about how Rails processes requests, I think my solution is fine.
STEP 1 - Create the model without migration
rails g model StatItem --migration=false
STEP 2 - Create a temporary table called stat_items
# First, drop any existing table created by older requests (database connections are kept open by the server process(es)).
ActiveRecord::Base.connection.execute('DROP TABLE IF EXISTS stat_items')
# Second, create the temporary table with the desired columns (note: an integer column called 'id' should exist in the table).
ActiveRecord::Base.connection.execute('CREATE TEMP TABLE stat_items (id integer, ...)')
STEP 3 - Execute an SQL statement that inserts rows in stat_items
STEP 4 - Access the table using the model, as usual
For example:
StatItem.find_by_...
Any comments/improvements are highly appreciated.
Function:
App.Fullquiz.find(5)
queries my server API for this record, but this record is already available in the model.
How can I get this record without querying the server?
I think you can use
store.getById()
http://emberjs.com/api/data/classes/DS.Store.html#method_getById
Get a record by a given type and ID without triggering a fetch.
Good luck