Is there a way to query the Ember Data store to find the max id for a given model? I know I could find all the records, walk through them, and keep track of the largest id seen so far, etc.
I'm new to Ember, and I'm used to being able to call aggregate methods on a database: max(), min(), sum(), all that stuff. There HAS to be a way to do this in Ember, right? I have searched and searched, and I'm honestly a little mystified that I can't find anything for what has to be a very common use case.
I'm currently using fixtures and want to add a new line to an order. When I create the new record I need to find the max id of all the other records so I can increment it for the new record.
Ember (like web applications in general) doesn't communicate with a database directly, so there are no aggregate functions like max() or min() to call on the store.
It's easy enough to calculate the next id for your fixtures:
// collect the ids of the existing records
let ids = models.mapBy('id');
// the next id is one greater than the current maximum
let nextId = Math.max(...ids) + 1;
Or you could use ember-cli-mirage in your tests, which takes care of this for you.
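For example, here is a minimal sketch of that approach applied when creating a record. The 'order-line' model name and the surrounding context are hypothetical, and it assumes a modern Ember Data store:

// peekAll reads records already loaded in the store without a network request
let lines = this.store.peekAll('order-line');

// ids come back as strings, so coerce them to numbers before comparing
let ids = lines.mapBy('id').map(Number);
let nextId = ids.length ? Math.max(...ids) + 1 : 1;

this.store.createRecord('order-line', { id: String(nextId) });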
Related
I have a web application that uses DynamoDB to store large JSON objects and perform simple CRUD operations on them via a web API. I would like to add a new table that acts as a categorization of these values. The user should be able to select from a selection box which category an object belongs to. If a suitable category does not exist, the user should be able to create a new category by specifying a name, which will then be available to other objects in the future.
It is critical to the application that each of these categories be given an integer ID, incrementing from 1. These auto-generated numbers become reproducible serial numbers for back-end reports, which will not use the user-visible text name.
So I would like to have a simple API available from the web frontend that allows me to:
A) GET /category : produces { int : string, ... }, a map of every category ID to its name
B) POST /category : accepts a string name and stores it under the next integer ID
Here are some ideas for how to handle this kind of project.
Store it in DynamoDB with integer indexes. This has some benefits, but it leaves a lot to be desired. Firstly, there's no auto-incrementing ID in DynamoDB, but I could certainly read the current state of the table, create a new ID, and store the result. This might have issues with consistency and race conditions, but there's probably a way to do it safely. It might, however, be a big anti-pattern to use DynamoDB this way.
Store it in DynamoDB as one object in a table with some random index. Just store the mapping as a JSON object. This essentially abandons the notion of tables in DynamoDB and uses it as a simple file store. It might also run into race conditions.
Use AWS ElastiCache to run a Redis key-value store. This might be "the right" decision, but the downside is that ElastiCache is an always-on offering where you pay per hour. For a low-traffic site like mine I'd be paying a minimum of about $12/mo, I think, and I would really like this to be pay-per-access/update given the low volume. I'm not sure Redis has a built-in auto-increment feature in the form I'd need, but it's trivial to write a transaction that gets the length of the table, adds one, and stores a new value. Race conditions are easily avoided with this solution.
Use a SQL database like AWS Aurora or MySQL. This has the same upsides as Redis, but it's even more overkill, it costs more, and it's still always-on.
Run my own in-memory web service, MongoDB, etc. Either way, you're paying for containers running constantly. Writing my own thing is obviously silly, and while I'm sure there are services that match this need perfectly, they would all require an always-on container.
Is there a good way to store a simple list or integer mapping like this that doesn't incur a constant monthly cost? Is there a better way to do this with DynamoDB?
Store the maxCounterValue as an item in DynamoDB.
For POST /category, perform the following (a sketch follows below):
1. Get the current maxCounterValue.
2. TransactWrite:
   - Put the category name and id into a new item with id = maxCounterValue + 1.
   - Update maxCounterValue to maxCounterValue + 1, with a ConditionExpression checking that maxCounterValue = :valueFromGetOperation.
3. If the TransactWrite fails, start again at step 1 and retry up to X more times.
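A minimal sketch of those steps in JavaScript, assuming the AWS SDK v3 for JavaScript and a hypothetical single-table layout where the counter lives in a special item with id = "maxCounterValue" (the table and attribute names are illustrative, not prescriptive):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "categories"; // hypothetical table name

async function createCategory(name, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    // Step 1: read the current counter (treat a missing item as 0).
    const { Item } = await ddb.send(
      new GetCommand({ TableName: TABLE, Key: { id: "maxCounterValue" } })
    );
    const current = Item ? Item.value : 0;
    const nextId = current + 1;

    try {
      // Step 2: atomically put the new category AND bump the counter.
      // The ConditionExpression cancels the whole transaction if another
      // writer bumped the counter between our read and this write.
      await ddb.send(
        new TransactWriteCommand({
          TransactItems: [
            {
              Put: {
                TableName: TABLE,
                Item: { id: "category#" + nextId, categoryId: nextId, name: name },
              },
            },
            {
              Update: {
                TableName: TABLE,
                Key: { id: "maxCounterValue" },
                UpdateExpression: "SET #v = :next",
                ConditionExpression: "attribute_not_exists(#v) OR #v = :current",
                ExpressionAttributeNames: { "#v": "value" },
                ExpressionAttributeValues: { ":next": nextId, ":current": current },
              },
            },
          ],
        })
      );
      return nextId;
    } catch (err) {
      // Step 3: the transaction was cancelled (another writer won the race);
      // loop back and retry with a fresh read.
    }
  }
  throw new Error("could not allocate a category id after " + maxRetries + " retries");
}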
I'm reviewing some code and I came across a line that does the following:
Person.find_by(name: "Tom").id
The above code gets the FIRST record with a name of "Tom", builds a model from it, and then gets the id of that model. Since we only want the id, retrieving all of the record's data and instantiating the model is unnecessary. What's the best way to optimize this using Active Record queries?
I'd like to avoid a raw SQL solution. So far I have been able to come up with:
Person.where(name: "Tom").pluck(:id).first
This is faster in some situations since pluck doesn't build the actual model object and doesn't load all the data. However, pluck builds an array of ids for every record with the name "Tom", whereas the original statement only ever returns a single object or nil, so this technique could potentially be worse depending on the where clause. I'd like to avoid the array creation and the potential for a very long list of ids coming back from the server. I could add a limit(1) to the chain,
Person.where(name: "Tom").limit(1).pluck(:id).first
but it seems like I'm making this more complicated than it should be.
With Rails 6 you can use the new pick method:
Person.where(name: 'Tom').pick(:id)
This is a little verbose, but you can use select_value from the ActiveRecord connection like this:
Person.connection.select_value(Person.select(:id).where(name: 'Tom').limit(1))
This might work depending on what you're looking for.
Person.where(name: "Tom").minimum(:id)
Person.where(name: "Tom").maximum(:id)
These compute MIN(id) and MAX(id) over the matching rows, while Person.where(name: "Tom").first.id orders by your default sort, which could be id, created_at, or the primary key.
Either way, test and see if it works for you.
How do people deal with index data (the data usually shown on index pages, like a customer list) versus the full model detail data?
When somebody goes to the customer/index route, they only need access to a small subset of the full customer resource. Since I am dealing with legacy data, my customer model has more than 10 relationships. It seems wasteful to have the API return a complete and full customer representation for every customer just to render a list/select/index view.
I know those relationships are somewhat lazy-loaded, but it still takes effort on the backend to pull all those relationships in. For some relationships (such as customer->invoices) this could be a large list of ids.
Answers to this can be very opinionated, but here are my two cents:
The API you are drawing on for your data should have an endpoint to fetch the subset of data you're interested in, e.g. /api/mini-customer vs /api/customer.
You can then either define two separate models (one to represent the record in the list and one to represent the detailed view), or simply populate the original model with the subset of data and merge the extra data in at a later point. A sketch of the two-model approach follows below.
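Here is a minimal sketch of the two-model idea in Ember Data; the model names, attributes, and endpoints are all hypothetical:

// app/models/customer-summary.js -- backs the slim /api/mini-customer endpoint
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  email: DS.attr('string')
});

// app/models/customer.js -- the full detail representation
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  email: DS.attr('string'),
  invoices: DS.hasMany('invoice', { async: true })
  // ...the remaining legacy relationships
});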
That said, I've also seen plenty of cases such as the one you describe, where you load all data initially and just display the subset to begin with. If it's reasonable that the data will eventually be used and your page-load constraints can handle it, then this can be an acceptable approach.
With Ember-Data, it's possible to find a model instance by its id:
App.Person.find(1)
What if you want to find a model instance by another attribute, such as token? Is it possible to do something like:
App.Person.find_by(token: "ASDFGASDFASDF")
If so, should we be concerned about indexing searchable columns? How would that be done?
Your ArrayController should have a findBy method, which will return the first child element that matches your query. Alternatively you could use filterBy, which returns all elements that match.
As for indexing, that's something you might want to look at to increase performance, but that would be done on your server and depends on your setup.
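As a quick sketch, assuming the records are already loaded into the controller (the token value is just the one from the question):

// inside the ArrayController: findBy returns the first match or undefined,
// filterBy returns an array of all matches
var person = this.get('content').findBy('token', 'ASDFGASDFASDF');
var matches = this.get('content').filterBy('token', 'ASDFGASDFASDF');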
On my website I'm going to award points for certain activities, similarly to Stack Overflow. I would like to calculate the value based on many factors, so each computation for each user will take, for instance, 10 SQL queries.
I was thinking about caching it:
in memcache,
in the user's row in the database (so that wherever I fetch the user from the database, I easily have the points).
Storing it in the database seems easy, but on the other hand it's redundant information, so I decided to ask, since maybe there is an easier and prettier solution that I missed.
I'd highly recommend this app for storing the calculated values in the model: https://github.com/initcrash/django-denorm
Memcache is faster than the db... but if you already have to retrieve the record from the db anyway, having the calculated values cached in the rows you're retrieving (as a 'denormalised' field) is even faster, plus it's persistent.