Keep only newest records using DQL - doctrine-orm

I have a symfony app with doctrine. There is a table like:
+------+---------------------+-----+
| user | log_date            | foo |
+------+---------------------+-----+
| john | 2018-03-20 22:59:18 |  58 |
| kyle | 2018-04-11 13:45:02 |  22 |
| paul | 2018-11-08 22:19:16 |  41 |
| kyle | 2018-08-14 09:39:26 |  19 |
| fred | 2018-03-28 06:08:31 |  24 |
| john | 2018-01-21 11:52:17 |  81 |
| ...  | ...                 | ... |
+------+---------------------+-----+
A cron should execute a symfony command to delete all records but keep the latest 10 of every user. Can this be done using DQL or do I have to use an SQL (sub-)query?

I think something like this in the entity repository can get all of a user's entries except the latest 10:
public function getAllExceptLatest($user)
{
    return $this
        ->createQueryBuilder('t')
        ->andWhere('t.user = :user')
        ->andWhere('t.logDate <= :logDate')
        ->orderBy('t.logDate', 'DESC')
        ->setParameter('user', $user)
        ->setParameter('logDate', $this->getLatestDate($user))
        // skip the 10 newest rows, return the rest
        ->setFirstResult(10)
        ->getQuery()
        ->execute();
}
public function getLatestDate($user)
{
    return $this->createQueryBuilder('e')
        ->select('MAX(e.logDate)')
        ->andWhere('e.user = :user')
        ->setParameter('user', $user)
        ->getQuery()
        ->getSingleScalarResult();
}
And in a controller (or the console command) you can use:
public function keepLatest()
{
    $em = $this->getDoctrine()->getManager();
    $userRepo = $em->getRepository(User::class);
    $logRepo = $em->getRepository(Log::class); // repository of the entity holding user/logDate/foo; use your actual class
    $users = $userRepo->findAll();
    foreach ($users as $u) {
        $records = $logRepo->getAllExceptLatest($u);
        foreach ($records as $r) {
            $em->remove($r);
        }
    }
    $em->flush();
}
I didn't test this, but similar methods work fine in my apps.
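Since the question mentions running this from cron as a Symfony command, here is a minimal sketch of how the same cleanup could be wrapped in a console command instead of a controller. The class, command, and entity names are just placeholders; adjust them to your project.

<?php
// Sketch only: names (PruneLogsCommand, app:prune-logs, Log, User) are placeholders.
namespace App\Command;

use App\Entity\Log;
use App\Entity\User;
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class PruneLogsCommand extends Command
{
    protected static $defaultName = 'app:prune-logs';

    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        parent::__construct();
        $this->em = $em;
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $logRepo = $this->em->getRepository(Log::class);

        foreach ($this->em->getRepository(User::class)->findAll() as $user) {
            // remove everything but the newest 10 entries of this user
            foreach ($logRepo->getAllExceptLatest($user) as $record) {
                $this->em->remove($record);
            }
        }

        $this->em->flush();

        return 0;
    }
}

The cron entry then just calls bin/console app:prune-logs.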

Related

Django: concatenate one column's values into one annotated column - Subquery returns more than 1 row

Hello, my models are below:
class IP(models.Model):
    subnet = models.ForeignKey(Subnet, verbose_name="SUBNET", on_delete=models.CASCADE, related_name="ip_set")
    ip = models.GenericIPAddressField(verbose_name="IP", protocol="both", unpack_ipv4=True, unique=True)
    asset = models.ManyToManyField(Asset, verbose_name="HOSTNAME", through="AssetIP", related_name="ip_set", blank=True)
    description = models.CharField(verbose_name="DESCRIPTION", max_length=50, default="", null=True, blank=True)

class AssetIP(models.Model):
    TYPE_CHOICES = [
        ("GATEWAY-IP", "GATEWAY-IP"),
        ("MGT-IP", "MGT-IP"),
        ("PRIMARY-IP", "PRIMARY-IP"),
        ("OTHER-IP", "OTHER-IP"),
    ]
    ip_type = models.CharField(verbose_name="IP TYPE", max_length=30, choices=TYPE_CHOICES)
    ip = models.ForeignKey(IP, verbose_name="IP", on_delete=models.CASCADE, related_name="asset_ip_set")
    asset = models.ForeignKey(Asset, verbose_name="HOSTNAME", on_delete=models.CASCADE, related_name="asset_ip_set")

class Asset(models.Model):
    barcode = models.CharField(verbose_name="Barcode", max_length=60, blank=True, null=True, unique=True)
    hostname = models.CharField(verbose_name="Hostname", max_length=30)
The data in these models is below.
IP Model
| IP         | Asset                | Description |
|------------|----------------------|-------------|
| 10.10.10.2 | A_HOST,B_HOST,C_HOST | -           |
| 10.10.10.3 | A_HOST,B_HOST        | -           |
| 10.10.10.4 | A_HOST               | -           |
| 10.10.10.5 | A_HOST               | -           |
AssetIP through Model
| IP         | Asset  | IP_TYPE    |
|------------|--------|------------|
| 10.10.10.2 | A_HOST | OTHER-IP   |
| 10.10.10.2 | B_HOST | OTHER-IP   |
| 10.10.10.2 | C_HOST | OTHER-IP   |
| 10.10.10.3 | A_HOST | OTHER-IP   |
| 10.10.10.4 | A_HOST | OTHER-IP   |
| 10.10.10.5 | A_HOST | PRIMARY-IP |
So the Asset query result looks like this:
Result = Asset.objects.all()
Each item in this result has these fields:
Asset = {
    barcode: "ddd",
    hostname: "A_HOST",
}
I want the fields and result to be:
Asset = {
    barcode: "ddd",
    hostname: "A_HOST",
    primary_ip: "10.10.10.5",
    other_ip: "10.10.10.2, 10.10.10.3, 10.10.10.4"
}
I tried this query, but the queryset does not filter on "OTHER-IP":
assets = Asset.objects.annotate(other_ips=GroupConcat('asset_ip_set__ip__ip'))
assets[0].other_ips
result : '10.10.10.2,10.10.10.3,10.10.10.4,10.10.10.5'
and I tried this queryset:
filtered_ips = AssetIP.objects.filter(asset=OuterRef('pk'), ip_type="OTHER-IP").values_list('ip__ip', flat=True)

Asset.objects.filter(asset_ip_set__ip_type="OTHER-IP").annotate(
    other_ips=GroupConcat(
        Subquery(filtered_ips),
        delimiter=', '
    )
)
result : django.db.utils.OperationalError: (1242, 'Subquery returns more than 1 row')
Help me....
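One direction that might be worth trying instead of the Subquery, assuming your GroupConcat is a regular django.db.models.Aggregate subclass (for example the django-mysql one) and you are on Django 2.0+ where aggregates accept filter=: conditional aggregation, which should produce both IP lists per Asset in a single query. A rough, untested sketch under those assumptions:

from django.db.models import Q
# GroupConcat is assumed to be the same class you already import elsewhere,
# e.g. from django_mysql.models import GroupConcat

assets = Asset.objects.annotate(
    # collect only PRIMARY-IP rows of the through table
    primary_ip=GroupConcat(
        'asset_ip_set__ip__ip',
        filter=Q(asset_ip_set__ip_type="PRIMARY-IP"),
    ),
    # collect only OTHER-IP rows of the through table
    other_ip=GroupConcat(
        'asset_ip_set__ip__ip',
        filter=Q(asset_ip_set__ip_type="OTHER-IP"),
    ),
)

assets[0].other_ip   # e.g. '10.10.10.2,10.10.10.3,10.10.10.4' for A_HOST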

Power BI - M query to join records matching range

I have an import query (table a) and an imported Excel file (table b) containing records I am trying to match it up with.
I am looking for a method to replicate this type of SQL in M:
SELECT a.loc_id, a.other_data, b.stk
FROM a INNER JOIN b on a.loc_id BETWEEN b.from_loc AND b.to_loc
Table A
| loc_id   | other data |
--------------------------
| 34A032B1 | ...        |
| 34A3Z011 | ...        |
| 3DD23A41 | ...        |
Table B
| stk    | from_loc | to_loc   |
--------------------------------
| STKA01 | 34A01    | 34A30ZZZ |
| STKA02 | 34A31    | 34A50ZZZ |
| ...    | ...      | ...      |
Goal
| loc_id   | other data | stk    |
----------------------------------
| 34A032B1 | ...        | STKA01 |
| 34A3Z011 | ...        | STKA02 |
| 3DD23A41 | ...        | STKD01 |
All of the other queries I can find along these lines use numbers, dates, or times in the BETWEEN clause, and seem to work by exploding the (from, to) range into all possible values and then filtering out the extra rows. However, I need to use string comparisons, and exploding those into all possible values would be infeasible.
Between all the various solutions I could find, the closest I've come is to add a custom column on table a:
Table.SelectRows(
    table_b,
    (a) => Value.Compare([loc_id], table_b[from_loc]) = 1
       and Value.Compare([loc_id], table_b[to_loc]) = -1
)
This does return all the columns from table_b, however, when expanding the column, the values are all null.
Your ranges are not very specific ("after 34A01 could be any string..."), so it is hard to figure out how your series progresses.
But maybe you can just test how a value "sorts" using the native string comparison in Power Query.
Add a custom column with Table.SelectRows:
= try Table.SelectRows(TableB, (t)=> t[from_loc]<=[loc_id] and t[to_loc] >= [loc_id])[stk]{0} otherwise null
To reproduce with your examples:
let
    TableB = Table.FromColumns(
        {{"STKA01","STKA02"},
         {"34A01","34A31"},
         {"34A30ZZZ","34A50ZZZ"}},
        type table[stk = text, from_loc = text, to_loc = text]),
    TableA = Table.FromColumns(
        {{"34A032B1","34A3Z011","3DD23A41"},
         {"...","...","..."}},
        type table[loc_id = text, #"other data" = text]),
    //determine where it sorts and return the stk
    #"Added Custom" = Table.AddColumn(TableA, "stk", each
        try Table.SelectRows(TableB, (t) => t[from_loc] <= [loc_id] and t[to_loc] >= [loc_id])[stk]{0} otherwise null)
in
    #"Added Custom"
Note: if the above algorithm is too slow, there may be faster methods of obtaining these results
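One possible (untested) variant along those lines is to buffer TableB once with Table.Buffer, so the per-row lookup does not keep re-evaluating it:

let
    TableB = Table.FromColumns(
        {{"STKA01","STKA02"},
         {"34A01","34A31"},
         {"34A30ZZZ","34A50ZZZ"}},
        type table[stk = text, from_loc = text, to_loc = text]),
    TableA = Table.FromColumns(
        {{"34A032B1","34A3Z011","3DD23A41"},
         {"...","...","..."}},
        type table[loc_id = text, #"other data" = text]),
    // hold table B in memory once instead of re-reading it for every row of table A
    BufferedB = Table.Buffer(TableB),
    #"Added Custom" = Table.AddColumn(TableA, "stk", each
        try Table.SelectRows(BufferedB, (t) => t[from_loc] <= [loc_id] and t[to_loc] >= [loc_id])[stk]{0} otherwise null)
in
    #"Added Custom"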

Mockito verifying method invocation without using equals method

While using Spock I can do something like this:
when:
12.times {mailSender.send("blabla", "subject", "content")}
then:
12 * javaMailSender.send(_)
When I tried to do the same in Mockito:
verify(javaMailSender,times(12)).send(any(SimpleMailMessage.class))
I got an error that SimpleMailMessage has null values, so I had to initialize it in the test:
SimpleMailMessage simpleMailMessage = new SimpleMailMessage()
simpleMailMessage.setTo("blablabla")
simpleMailMessage.subject = "subject"
simpleMailMessage.text = "content"
verify(javaMailSender, times(12)).send(simpleMailMessage)
Now it works, but it is a lot of extra work and I really don't care about equality. What if SimpleMailMessage gains many more arguments, or other objects with their own arguments? Is there any way to check that the send method was simply called X times?
EDIT: added implementation of send method.
private fun sendEmail(recipient: String, subject: String, content: String) {
    val mailMessage = SimpleMailMessage()
    mailMessage.setTo(recipient)
    mailMessage.subject = subject
    mailMessage.text = content
    javaMailSender.send(mailMessage)
}
There are 2 senders: mailSender is my custom object and javaMailSender is from another library.
Stacktrace:
Mockito.verify(javaMailSender,
Mockito.times(2)).send(Mockito.any(SimpleMailMessage.class))
| | | | |
| | | | null
| | | Wanted but not invoked:
| | | javaMailSender.send(
| | | <any org.springframework.mail.SimpleMailMessage>
| | | );
| | | -> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| | |
| | | However, there were exactly 2 interactions with this mock:
| | | javaMailSender.send(
| | | SimpleMailMessage: from=null; replyTo=null; to=blabla; cc=; bcc=; sentDate=null; subject=subject; text=content
| | | );
| | | -> at MailSenderServiceImpl.sendEmail(MailSenderServiceImpl.kt:42)
| | |
| | | javaMailSender.send(
| | | SimpleMailMessage: from=null; replyTo=null; to=blabla; cc=; bcc=; sentDate=null; subject=subject; text=content
| | | );
If you don't care about the parameter of send, leave any() empty:
verify(javaMailSender,times(12)).send(any())
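For reference, a minimal self-contained sketch of count-only verification in plain Java with JUnit 5 and Mockito (the test class name is hypothetical, the mock is invoked directly just to illustrate the point, and the matcher is typed here only so the Java compiler can resolve the overloaded send(...)):

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;

class SendCountSketchTest { // hypothetical test class

    @Test
    void countsSendInvocationsWithoutEqualityCheck() {
        JavaMailSender javaMailSender = mock(JavaMailSender.class);

        for (int i = 0; i < 12; i++) {
            SimpleMailMessage message = new SimpleMailMessage();
            message.setTo("blabla");
            javaMailSender.send(message); // each call uses a different instance
        }

        // the matcher accepts every message, so only the invocation count is asserted
        verify(javaMailSender, times(12)).send(any(SimpleMailMessage.class));
    }
}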

Building dojo generate multiple files

I'm using the Dojo build tool to generate a single dojo.js file, but I don't know why I'm getting multiple files.
This is my example profile:
var profile = (function(){
    return {
        basePath: "../../../",
        releaseDir: "./app",
        releaseName: "lib",
        action: "release",
        layerOptimize: "closure",
        optimize: "closure",
        mini: true,
        stripConsole: "warn",
        selectorEngine: "lite",

        defaultConfig: {
            hasCache: {
                "dojo-built": 1,
                "dojo-loader": 1,
                "dom": 1,
                "host-browser": 1,
                "config-selectorEngine": "lite"
            },
            async: 1
        },

        staticHasFeatures: {
            'dojo-trace-api': 0,
            'dojo-log-api': 0,
            'dojo-publish-privates': 0,
            'dojo-sync-loader': 0,
            'dojo-test-sniff': 0
        },

        packages: ['dojo'],

        layers: {
            "dojo/dojo": {
                include: ["dojo/domReady"],
                customBase: true,
                boot: true
            }
        }
    };
})();
This is my .bat:
./util/buildscripts/build profile=cgl-dojo
After executing it, this is the release folder:
app
\---lib
\---dojo
+---cldr
| \---nls
| +---ar
| +---ca
| +---cs
| +---da
| +---de
| +---el
| +---en
| +---en-au
| +---en-ca
| +---en-gb
| +---es
| +---fi
| +---fr
| +---fr-ch
| +---he
| +---hu
| +---it
| +---ja
| +---ko
| +---nb
| +---nl
| +---pl
| +---pt
| +---pt-pt
| +---ro
| +---ru
| +---sk
| +---sl
| +---sv
| +---th
| +---tr
| +---zh
| +---zh-hant
| +---zh-hk
| \---zh-tw
+---data
| +---api
| \---util
+---date
+---dnd
+---errors
+---fx
+---io
+---nls
| +---ar
| +---az
| +---bg
| +---ca
| +---cs
| +---da
| +---de
| +---el
| +---es
| +---fi
| +---fr
| +---he
| +---hr
| +---hu
| +---it
| +---ja
| +---kk
| +---ko
| +---nb
| +---nl
| +---pl
| +---pt
| +---pt-pt
| +---ro
| +---ru
| +---sk
| +---sl
| +---sv
| +---th
| +---tr
| +---uk
| +---zh
| \---zh-tw
+---promise
+---request
+---resources
| \---images
+---router
+---rpc
+---selector
+---store
| +---api
| \---util
+---_base
\---_firebug
I need a release folder with only one file, please help me.
The entire tree of registered packages is always built because the build tool has no way of knowing whether or not you are conditionally requiring other modules within your application. There is no way to make the build system only output one file, and in fact a single file is a bad idea because each locale has its own set of localisation rules. If you want to reduce the number of files after a build, you can just delete all the ones you don’t want.
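If you do decide that the built dojo/dojo.js layer is the only file you want to ship, one pragmatic (untested) option is to copy just that file out of the release tree in the same .bat right after the build step; the path follows the release layout shown above, and the dist folder is only an example target:

rem run the build as before, then keep only the built layer
./util/buildscripts/build profile=cgl-dojo
rem "dist" is just an example target folder
if not exist dist mkdir dist
copy app\lib\dojo\dojo.js dist\dojo.js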

CouchDB: Group data by key

I have the following data in my database:
| value1 | value2 |
|--------+--------|
| 1      | a      |
| 1      | b      |
| 2      | a      |
| 3      | c      |
| 3      | d      |
|--------+--------|
What I want as a output is {"key":1,"value":[a,b]},{"key":2,"value":[a]},{"key":3,"value":[c,d]}
I wrote this map function (but I am not quite sure if it is correct):
function (doc) {
    emit(doc.value1, doc.value2);
}
...but I am missing the reduce-function. Thanks for your help!
Not sure if this can/should be done with a reduce function.
However, you can reformat the output with lists. Try the following list function:
function (head, req) {
    var row,
        returnObj = {};
    while (row = getRow()) {
        if (returnObj[row.key]) {
            returnObj[row.key].push(row.value);
        } else {
            returnObj[row.key] = [row.value];
        }
    }
    send(JSON.stringify(returnObj));
};
The output should look like this:
{
    "1": ["a", "b"],
    "2": ["a"],
    "3": ["c", "d"]
}
Hope that helps.
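For completeness: the list function has to live in a design document next to the view whose rows it reformats, and it is invoked through the _list endpoint. With hypothetical names for the design document, list function, and view, the design document would look roughly like this:

// hypothetical design document; "grouping", "by_value1" and "group_by_key" are placeholders
{
  "_id": "_design/grouping",
  "views": {
    "by_value1": {
      "map": "function (doc) { emit(doc.value1, doc.value2); }"
    }
  },
  "lists": {
    "group_by_key": "function (head, req) { /* list function from above */ }"
  }
}

It is then queried with GET /mydb/_design/grouping/_list/group_by_key/by_value1.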