I am creating a listview with 5 columns in Gtk+ using C++. I was able to do that. But the problem is that I need subcolumns for the 2nd column, and I'm not sure how to proceed.
| first column |  second column  | third |
|              | SC1 | SC2 | SC3 |       |
|              |     |     |     |       |
Is this possible? Can you suggest how to go about it?
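For reference, the plain five-column list view itself can be set up like this (shown as a rough PyGObject/Python sketch rather than the C++ original; the column titles are placeholders):
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# Five string columns in the model, one TreeViewColumn per model column.
store = Gtk.ListStore(str, str, str, str, str)
view = Gtk.TreeView(model=store)
for i, title in enumerate(['first column', 'second column', 'third', 'fourth', 'fifth']):
    view.append_column(Gtk.TreeViewColumn(title, Gtk.CellRendererText(), text=i))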
According to my assignment, an admin must be able to create Polls with Questions (create, delete, update) and Choices related to those questions. All of this should be displayed and changeable on the same admin page.
Poll
|
|_question_1
| |
| |_choice_1(text)
| |
| |_choice_2
| |
| |_choice_3
|
|_question_2
| |
| |_choice_1
| |
| |_choice_2
| |
| |_choice_3
|
|_question_3
  |
  |_choice_1
  |
  |_choice_2
  |
  |_choice_3
OK, it's not a problem to display one level of nesting, like so:
from django.contrib import admin
from .models import Question

class QuestionInline(admin.StackedInline):
    model = Question

class PollAdmin(admin.ModelAdmin):
    inlines = [
        QuestionInline,
    ]
But how do I get the required poll design structure?
Check out this library; it should provide the functionality.
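As an illustration of what such a library typically provides, here is a minimal sketch assuming django-nested-admin (an assumption; the linked library may be a different one) and the Poll/Question/Choice models from the question:
from django.contrib import admin
import nested_admin
from .models import Poll, Question, Choice

class ChoiceInline(nested_admin.NestedStackedInline):
    model = Choice

class QuestionInline(nested_admin.NestedStackedInline):
    model = Question
    inlines = [ChoiceInline]      # second level of nesting lives here

class PollAdmin(nested_admin.NestedModelAdmin):
    inlines = [QuestionInline]

admin.site.register(Poll, PollAdmin)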
The problem is the following:
Let's consider 2 sets of images embedded in a Power BI (.pbix) file:
Set 1: Img1, Img2
Set 2: Img3, Img4, Img5, Img6
For each combination of two images (one from each set), we want to set an action (to a bookmark) that displays a corresponding visual.
So here, we have these combinations:
(Img1, Img3) -> display Viz1.3
(Img1, Img4) -> display Viz1.4
(Img1, Img5) -> display Viz1.5
(Img1, Img6) -> display Viz1.6
(Img2, Img3) -> display Viz2.3
(Img2, Img4) -> display Viz2.4
(Img2, Img5) -> display Viz2.5
(Img2, Img6) -> display Viz2.6
Let's suppose that we have selected (Img1, Img3) and then we select (click) Img2.
How can we keep the selection on Img3 in order to display Viz2.3?
Marco
Each image can, I'm pretty sure, only trigger one bookmark. If I'm reading right, I think you need to duplicate the images and hide/show according to the state of the display.
For example, in the scenario above, if the display is Viz1.3 and you click Img2, the updates would be to the viz and to the images in set 2:
Viz1.3 -> Viz2.3
Img3.1 -> Img3.2
Img4.1 -> Img4.2
Img5.1 -> Img5.2
Img6.1 -> Img6.2
And if you updated from there by clicking Img4 to get to Viz2.4, you'd have to update the images in set one:
Viz2.3 -> Viz2.4
Img1.3 -> Img1.4
Img2.3 -> Img2.4
So, ultimately, you'd need four versions of each image in set one (one for each image in set two) and two versions of each image in set two (to match set one). And though you're only showing 6 at any given time, that's 16 images, which is twice as many as you have visuals. But if you have to have it that way for UX, then I can't think of another way to do it.
| Image | Bookmark |
|-----------|--------------|
| Img1.3 | Viz1.3 |
| Img1.4 | Viz1.4 |
| Img1.5 | Viz1.5 |
| Img1.6 | Viz1.6 |
| Img2.3 | Viz2.3 |
| Img2.4 | Viz2.4 |
| Img2.5 | Viz2.5 |
| Img2.6 | Viz2.6 |
| Img3.1 | Viz1.3 |
| Img3.2 | Viz2.3 |
| Img4.1 | Viz1.4 |
| Img4.2 | Viz2.4 |
| Img5.1 | Viz1.5 |
| Img5.2 | Viz2.5 |
| Img6.1 | Viz1.6 |
| Img6.2 | Viz2.6 |
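For reference, a tiny Python sketch (nothing Power BI-specific, just a check on the counting) that enumerates the same 16 image copies and their bookmark targets as the table above:
from itertools import product

set_one = [1, 2]          # Img1, Img2
set_two = [3, 4, 5, 6]    # Img3..Img6

rows = []
for a, b in product(set_one, set_two):
    rows.append(('Img{}.{}'.format(a, b), 'Viz{}.{}'.format(a, b)))   # copies of set-one images
for b, a in product(set_two, set_one):
    rows.append(('Img{}.{}'.format(b, a), 'Viz{}.{}'.format(a, b)))   # copies of set-two images

for img, viz in rows:
    print(img, '->', viz)   # 16 rows, 8 distinct visuals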
Showing or hiding bookmarks by screen state
I have a column in Google Sheets where each cell contains pre-defined logic. For example, something like the second column in this table:
| 1 | =A1*-1 |
| 2 | =B2*-1 |
| -3 | =C2*-1 |
Let's say later I want to add the same logic to each cell in column B. For example, make it such that it looks like:
| 1 | =MAX(A1*-1,0) |
| 2 | =MAX(B2*-1,0) |
| -3 | =MAX(C2*-1,0) |
What is the fastest way to do this, besides manually typing MAX(...,0) in each cell? Normal Sheets functions act on the value of the cell, not the logic, so I'm a bit lost.
To my knowledge there isn't a function that pipes in the logic from one cell to another ...
try:
=ARRAYFORMULA(IF(A1:A="",,IF(SIGN(A1:A)<0, A1:A*-1, 0)))
=ARRAYFORMULA(IF(A1:A="",,IF(SIGN(A1:A)>0, A1:A, 0)))
I have a group of people who started receiving a specific type of social benefit called BenefitA. I am interested in knowing what (if any) social benefits the people in the group might have received immediately before they started receiving BenefitA.
My optimal result would be a table with the number of people who were receiving, respectively, BenefitB, BenefitC, or no benefit ("BenefitNon") immediately before they started receiving BenefitA.
My data is organized as a relational database with a fact table containing an ID for each person in my data and several dimension tables connected to the fact table. The important ones here are DimDreamYdelse (showing the type of benefit received) and DimDreamTid (showing week and year). Here is an example of the raw data.
Data Example
I'm not sure how to approach this in Power BI as I am fairly new to the program. Any advice is most welcome.
I have tried to solve the problem in SQL, but since I need this as part of a running report, I need to do it in Power BI. This bit of code might however give some context to what I want to do.
USE FLISDATA_Beskaeftigelse;
SELECT dbo.FactDream.DimDreamTid, dbo.FactDream.DimDreamBenefit, dbo.DimDreamTid.Aar, dbo.DimDreamTid.UgeIAar, dbo.DimDreamBenefit.Benefit
FROM dbo.FactDream INNER JOIN
dbo.DimDreamTid ON dbo.FactDream.DimDreamTid = dbo.DimDreamTid.DimDreamTidID INNER JOIN
dbo.DimDreamYdelse ON dbo.FactDream.DimDreamBenefit = dbo.DimDreamYdelse.DimDreamBenefitID
WHERE (dbo.DimDreamYdelse.Ydelse LIKE 'Benefit%') AND (dbo.DimDreamTid.Aar = '2019')
ORDER BY dbo.DimDreamTid.Aar, dbo.DimDreamTid.UgeIAar
I suggest using Power Query to transform your table into a more suitable form for your analysis. Things would be much easier if each row of the table represented a "change" of benefit plan, like this:
| Person ID | Benefit From | Benefit To | Date |
|-----------|--------------|------------|------------|
| 15 | BenefitNon | BenefitA | 2019-07-01 |
| 15 | BenefitA | BenefitNon | 2019-12-01 |
| 17 | BenefitC | BenefitA | 2019-06-01 |
| 17 | BenefitA | BenefitB | 2019-08-01 |
| 17 | BenefitB | BenefitA | 2019-09-01 |
| ...
Then you can simply count the numbers by COUNTROWS(BenefitChanges) filtering/slicing with both Benefit From and Benefit To.
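To illustrate the reshaping outside of Power BI (a pandas sketch with made-up column names and data; in the report itself this would be done with Power Query steps and then counted with COUNTROWS):
import pandas as pd

# Hypothetical flattened fact table: one row per person per week with the benefit received.
df = pd.DataFrame({
    "PersonID": [15, 15, 15, 17, 17, 17],
    "Week":     ["2019-26", "2019-27", "2019-48", "2019-22", "2019-23", "2019-31"],
    "Benefit":  ["BenefitNon", "BenefitA", "BenefitNon", "BenefitC", "BenefitA", "BenefitB"],
})

df = df.sort_values(["PersonID", "Week"])
df["BenefitFrom"] = df.groupby("PersonID")["Benefit"].shift(1)

# Keep only the rows where the benefit actually changed.
changes = df[df["BenefitFrom"].notna() & (df["BenefitFrom"] != df["Benefit"])]
changes = changes.rename(columns={"Benefit": "BenefitTo"})[["PersonID", "BenefitFrom", "BenefitTo", "Week"]]

# How many people moved from each prior benefit into BenefitA.
print(changes[changes["BenefitTo"] == "BenefitA"].groupby("BenefitFrom")["PersonID"].nunique())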
I figured out how to read files into my pyspark shell (and script) from an S3 directory, e.g. by using:
rdd = sc.wholeTextFiles('s3n://bucketname/dir/*')
But, while that's great in letting me read all the files in ONE directory, I want to read every single file from all of the directories.
I don't want to flatten them or load everything at once, because I will have memory issues.
Instead, I need it to automatically go load all the files from each sub-directory in a batched manner. Is that possible?
Here's my directory structure:
S3_bucket_name -> year (2016 or 2017) -> month (max 12 folders) -> day (max 31 folders) -> sub-day folders (max 30; these just partition the data collected within each day).
Something like this, except it'll go for all 12 months and up to 31 days...
BucketName
|
|
|---Year(2016)
| |
| |---Month(11)
| | |
| | |---Day(01)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| | |---Day(02)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| |---Month(12)
|
|---Year(2017)
| |
| |---Month(1)
| | |
| | |---Day(01)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| | |---Day(02)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| |---Month(2)
Each arrow above represents a fork. e.g. I've been collecting data for 2 years, so there are 2 years in the "year" fork. Then for each year, up to 12 months max, and then for each month, up to 31 possible day folders. And in each day, there will be up to 30 folders just because I split it up that way...
I hope that makes sense...
I was looking at another post (read files recursively from sub directories with spark from s3 or local filesystem) where I believe they suggested using wildcards, so something like:
rdd = sc.wholeTextFiles('s3n://bucketname/*/data/*/*')
But the problem with that is it tries to find a common folder among the various subdirectories - in this case there are no guarantees and I would just need everything.
However, on that line of reasoning, I thought: what if I did
rdd = sc.wholeTextFiles('s3n://bucketname/*/*/*/*/*')
But the issue is that now I get OutOfMemory errors, probably because it's loading everything at once and freaking out.
Ideally, what I would be able to do is this:
Go to the sub-directory level of the day and read those in, so e.g.
First read in 2016/12/01, then 2016/12/02, up until 2016/12/31, and then 2017/01/01, then 2017/01/02, ... 2017/01/31 and so on.
That way, instead of using five wildcards (*) as I did above, I would somehow have it know to look through each sub-directory at the level of "day".
I thought of using a python dictionary to specify the file path to each of the days, but that seems like a rather cumbersome approach. What I mean by that is as follows:
file_dict = {
    0: '2016/12/01/*/*',
    1: '2016/12/02/*/*',
    ...
    30: '2016/12/31/*/*',
}
basically for all the folders, and then iterating through them and loading them in using something like this:
sc.wholeTextFiles('s3n://bucketname/' + file_dict[i])
But I don't want to manually type out all those paths. I hope this made sense...
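For illustration, here is roughly what generating those day-level paths programmatically could look like (a sketch; the bucket name, date range, and zero-padding are assumptions):
from datetime import date, timedelta

# Placeholder date range; adjust to the real collection period.
start, end = date(2016, 11, 1), date(2017, 2, 28)

day = start
while day <= end:
    # Zero-padded here; adjust if the real folder names are unpadded (e.g. Month(1)).
    path = 's3n://bucketname/{:%Y/%m/%d}/*/*'.format(day)
    # One batched read per day-level prefix, e.g.:
    # day_rdd = sc.wholeTextFiles(path)
    # ... process day_rdd and release it before moving to the next day ...
    day += timedelta(days=1)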
EDIT:
Another way of asking the question is: how do I read the files from a nested sub-directory structure in a batched way? How can I enumerate all the possible folder names in my S3 bucket in Python? Maybe that would help...
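On the enumeration idea, one possible sketch (an assumption: it uses the boto3 package and working S3 credentials, neither of which is mentioned above) that asks S3 for the "folder" prefixes directly:
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

def list_prefixes(bucket, prefix=''):
    """Yield the immediate 'sub-folder' prefixes under the given prefix."""
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter='/'):
        for cp in page.get('CommonPrefixes', []):
            yield cp['Prefix']

# Walk year/month/day and collect the day-level prefixes to read one at a time.
day_prefixes = [
    d
    for y in list_prefixes('bucketname')
    for m in list_prefixes('bucketname', y)
    for d in list_prefixes('bucketname', m)
]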
EDIT2:
The structure of the data in each of my files is as follows:
{json object 1},
{json object 2},
{json object 3},
...
{json object n},
For it to be "true JSON", it either just needed to be like the above without a trailing comma at the end, or something like this (note the square brackets and the lack of a final trailing comma):
[
{json object 1},
{json object 2},
{json object 3},
...
{json object n}
]
The reason I did it entirely in PySpark, as a script I submit, is that I forced myself to handle this formatting quirk manually. If I use Hive/Athena, I am not sure how to deal with it.
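For context, a rough sketch of what that manual handling can look like (assuming the comma-separated object format described above; the path is a placeholder):
import json

def parse_almost_json(pair):
    """Parse a file whose body is '{obj1},\n{obj2},\n...' into a list of dicts."""
    path, text = pair
    body = text.strip().rstrip(',')          # drop any trailing comma
    return json.loads('[' + body + ']')      # wrap in brackets to make it valid JSON

# One day-level prefix at a time, as (path, content) pairs:
# records = sc.wholeTextFiles('s3n://bucketname/2016/12/01/*/*').flatMap(parse_almost_json)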
Why don't you use Hive, or even better, Athena? These will both deploy tables on top of file systems to give you access to all the data. Then you can capture this in Spark.
Alternatively, I believe you can also use HiveQL in Spark to set up a tempTable on top of your file system location, and it'll register it all as a Hive table which you can execute SQL against. It's been a while since I've done that, but it is definitely doable.
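For example, something along these lines might work (a sketch using the Spark 2.x SparkSession API; the view name and query are placeholders, and it assumes the files are valid JSON, which is exactly the quirk discussed above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read everything under the bucket (Spark walks the partition directories),
# register it as a temporary view, and query it with SQL.
df = spark.read.json('s3n://bucketname/*/*/*/*/*')
df.createOrReplaceTempView('events')
spark.sql('SELECT COUNT(*) FROM events').show()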