Changing one value of row to column [closed] - powerbi

I am working on a Power BI report with two dimensions, DimWorkedClass and DimWorkedService. (The snippet above was obtained by exporting the matrix values to csv.) The requirement is that, for rows where the Worked Service is Text5, the Worked Class should also become Text5 instead of A (its current value).
It could be transformed in the backend, but is there any way to do it in Power BI?

This is trickier than it might appear, but it looks like this question has already been answered here:
Power Query Transform a Column based on Another Column
In your case, the M code would look something like this:
= Table.FromRecords(Table.TransformRows(#"[Source or Previous Step Here]",
      (here) => Record.TransformFields(here,
          {"Worked Class",
           each if here[Worked Service] = "text5" then "text5" else here[Worked Class]})))
(In the above, here represents the current row.)
Another answer points out a slightly cleaner way of doing this:
= Table.ReplaceValue(#"[Source or Previous Step Here]",
each [Worked Class],
each if [Worked Service] = "text5" then "text5" else [Worked Class],
Replacer.ReplaceText,{"Worked Class"})
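If it helps to see that step in context, below is a self-contained sketch you can paste into a blank query via the Advanced Editor. The sample rows and the "Text5" casing are made up for illustration; M comparisons are case-sensitive, so match the exact casing in your data. Replacer.ReplaceValue would also work here, since the whole cell value is being swapped.
let
    // Dummy rows standing in for your Worked Service / Worked Class columns
    Source = #table(
        {"Worked Service", "Worked Class"},
        {{"Text1", "A"}, {"Text5", "A"}, {"Text7", "B"}}
    ),
    // Overwrite Worked Class with Text5 only on the rows whose Worked Service is Text5
    ReplacedClass = Table.ReplaceValue(
        Source,
        each [Worked Class],
        each if [Worked Service] = "Text5" then "Text5" else [Worked Class],
        Replacer.ReplaceText,
        {"Worked Class"}
    )
in
    ReplacedClass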

Related

Items that are repeated a specified number of times in Spark (Scala) [closed]

I want to find the items that are repeated a specified number of times in Spark (Scala).
With an RDD like this:
rdd = [text1,text2,text3,text4,text2,text4,text1,text1]
if the time = 2 the output should be [text2,text4].
Say you have an RDD that has been created like this:
import org.apache.spark.rdd.RDD

val df: RDD[String] = spark.sparkContext.parallelize(Seq(
  "text1", "text2", "text3", "text1", "text2", "text4"
))
You can use countByValue followed by a filter and keys, where 2 is your time value:
df.countByValue().filter(tuple => tuple._2 == 2).keys
If we do a println, we get the following output:
[text1, text2]
Hope this is what you want, good luck!
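If you would rather keep the result distributed instead of collecting counts to the driver (countByValue returns a Map on the driver), here is a sketch of an equivalent pure-RDD approach, using the same df as above:
val time = 2
val repeated = df
  .map(word => (word, 1))                      // pair each item with a count of 1
  .reduceByKey(_ + _)                          // sum the counts per distinct item
  .filter { case (_, count) => count == time } // keep items seen exactly `time` times
  .keys                                        // drop the counts, keep the items

repeated.collect().foreach(println)            // text1 and text2 for the sample data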

Extracting Irregular Dates Associated with Specific Words in Single Cell with Multiple Lines in Excel [closed]

This is my first post on here, so please excuse any mistakes.
I have a column of cells. Each cell contains a variable number of lines within the cell. Most lines contain a date, and the format of the date varies slightly: sometimes it is MM/DD/YYYY, sometimes MM/DD/YY, etc. My goal is to extract the date associated with a specific word in each line. Also, each cell is on a row with an identifying number, so I need the output to be along the same row.
Example:
I have tried every extract date formula I can find and I have run into three problems:
how to pull multiple dates from the cell,
how to compensate for the fact that some rows have dates that are formatted differently, and
how to pull dates only associated with certain words on the same line as the date.
It appears that my best option would be to use regular expressions. However, I have only just started playing around with VBA, and I have been unable to adapt any of the related functions I have found to my specific problem. I was initially using this post as a guide to build my function, but I cannot get it to work: Extracting Multiple Dates from a single cell
Originally, I tried breaking the lines up by doing text to column and this formula:
=IF(SEARCH("Red",D2),DATE(MID(D2,SEARCH("??/??/20??",D2)+6,4),MID(D2,SEARCH("??/??/20??",D2),2),MID(D2,SEARCH("??/??/20??",D2)+3,2)), "No Red Date")
However, Text to Columns was not working because of the irregular spacing issues. Blue 1 and Blue 2 are just there to handle cases where there are multiple Blue dates in the cell, which there often are.
Not an answer as such: this doesn't really need code. You can get there by playing with MID, FIND and SUBSTITUTE; I used the following:
=IF(FIND(C$1,$B2,1)-11<11,MID($B2,1,10),MID($B2,FIND(C$1,$B2,1)-11,10))
Which gives this:
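If you do want the VBA/regular-expression route the question asks about, here is a rough sketch of a user-defined function; the keyword argument, the date pattern and the use of Chr(10) as the line separator are assumptions you may need to adjust (and CDate interprets MM/DD dates according to your locale):
Public Function DateForKeyword(cellText As String, keyword As String) As Variant
    Dim re As Object, parts() As String, i As Long
    Set re = CreateObject("VBScript.RegExp")
    re.Pattern = "\d{1,2}/\d{1,2}/\d{2,4}"      ' matches MM/DD/YY as well as MM/DD/YYYY
    parts = Split(cellText, Chr(10))            ' one array element per line in the cell
    For i = LBound(parts) To UBound(parts)
        If InStr(1, parts(i), keyword, vbTextCompare) > 0 Then
            If re.Test(parts(i)) Then
                DateForKeyword = CDate(re.Execute(parts(i))(0).Value)
                Exit Function
            End If
        End If
    Next i
    DateForKeyword = "No " & keyword & " date"
End Function
You would then call it from the sheet as, for example, =DateForKeyword($B2, C$1).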

ABAP free internal table [closed]

The answer to the question below is given as 2. Why does REFRESH seem to delete only the first row? Isn't it expected to delete all rows of an internal table?
What will be output by the following code?
DATA: BEGIN OF itab OCCURS 0, fval TYPE i, END OF itab.
itab-fval = 1. APPEND itab.
itab-fval = 2. APPEND itab.
REFRESH itab.
WRITE: /1 itab-fval.
A: 1
B: 2
C: blank
D: 0
Answer: B
Assuming the code contains no syntax errors (as originally posted it was missing a '-' when assigning the value 2 and when writing the value), B is the correct answer, but not for the reason you state. It is not that REFRESH only removes the first line from the table; it is that REFRESH does not clear the header line of the table. So after the REFRESH, the header line still holds the last value assigned to it, which is 2. This is easy to verify by running the program in the debugger.
Note that the use of internal tables with header lines is obsolete, as mentioned in the SAP help.
You can use the CLEAR statement to clear the header line as well:
REFRESH itab.
CLEAR itab.
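For completeness, a rough sketch of the same example without the obsolete header line, using an explicit work area (all names are illustrative):
* The table body and the work area are separate variables, so there is no
* hidden header line to forget about.
TYPES: BEGIN OF ty_line,
         fval TYPE i,
       END OF ty_line.

DATA: lt_itab TYPE STANDARD TABLE OF ty_line WITH DEFAULT KEY,
      ls_line TYPE ty_line.

ls_line-fval = 1. APPEND ls_line TO lt_itab.
ls_line-fval = 2. APPEND ls_line TO lt_itab.

CLEAR: lt_itab,          " empties the table body (same effect as REFRESH)
       ls_line.          " clears the work area explicitly
WRITE: / ls_line-fval.   " writes 0, since no header line is involved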

Weather data scraping and extraction in R [closed]

I'm working on a research project and have been assigned to do a bit of data scraping and write code in R that can extract the current temperature for a particular zip code from a site such as wunderground.com. This may be a bit of an abstract question, but does anyone know how to do the following?
I can extract the current temperature of a particular zip code by doing this:
temps <- readLines("http://www.wunderground.com/q/zmw:20904.1.99999")
edit(temps)
temps   # prints the page source, where I can find the line that contains the temperature
ldata <- temps[lnumber]   # lnumber is the line number found above
ldata
# then have a few gsub functions that basically extracts
# just the numerical data (57.8 for example) from that line of code
I have a csv file that contains the zip code of every city in the country, and I have it imported into R, arranged in a table by zip, city and state. My challenge now is to write a method (using Java terminology here, because I'm new to R) that extracts 6-7 consecutive zip codes (after a particular one that I specify) and runs the above code for each of them, by substituting the respective zip code into the readLines URL after the segment zmw: and running everything else based on that link. I don't quite know how to extract the data from the table (maybe with a for loop?), and then I don't know how to use that to modify the link, which is where I'm really getting stuck. I have a bit of a Java background, so I understand how to approach this problem; I just lack knowledge of the syntax. I realise this is quite an abstract question since I didn't provide a lot of code, but I just want to know the functions/syntax that will help me extract the data from the table and use it to modify the link through a function rather than doing it manually.
So this is about the Weather Underground data.
You can download csv files from individual weather stations in wunderground, however you need to know the weather station identifier. Here is an example URL for a weather station in Kirkland, WA (KWAKIRKL8):
http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1
Here is some R code:
library(RCurl)   # getURL() comes from the RCurl package

url <- 'http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1'
s <- getURL(url)
s <- gsub("<br>\n", "", s)               # strip the <br> tags embedded in the response
wdf <- read.csv(textConnection(s))
And here is a page with which you can manually find stations and their codes.
http://www.wunderground.com/wundermap/
Since you only need a few you can pick them out manually.
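If you do want to automate the "run it for 6-7 consecutive zip codes" part of the question rather than picking stations by hand, here is a rough sketch; zipTable, its zip column and the grep pattern are placeholders you would adapt to your csv and to whatever line of the page source actually holds the temperature:
get_temp <- function(zip) {
  url  <- paste0("http://www.wunderground.com/q/zmw:", zip, ".1.99999")
  page <- readLines(url, warn = FALSE)
  line <- grep("temperature", page, value = TRUE)[1]  # pick the line holding the reading
  as.numeric(gsub("[^0-9.]", "", line))               # keep only the numeric part, e.g. 57.8
}

start <- match("20904", zipTable$zip)        # the zip code you start from
zips  <- zipTable$zip[start:(start + 6)]     # 7 consecutive zip codes
temps <- sapply(zips, get_temp)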

DataMining / Analyzing responses to Multiple Choice Questions in a survey [closed]

I have a set of training data consisting of 20 multiple choice questions (A/B/C/D) answered by a hundred respondents. The answers are purely categorical and cannot be scaled to numerical values. 50 of these respondents were selected for free product trial. The selection process is not known. What interesting knowledge can be mined from this information?
The following is a list of what I have come up with so far-
A study of percentages (Example - Percentage of people who answered B on Qs.5 and got selected for free product trial)
Conditional probabilities (Example - What is the probability that a person will get selected for free product trial given that he answered B on Qs.5)
Naive Bayesian classifier (This can be used to predict whether a person will be selected or not for a given set of values for any subset of questions).
Can you think of any other interesting analysis or data-mining activities that can be performed?
The usual suspects like correlation can be eliminated as the response is not quantifiable/scoreable.
Is my approach correct?
It is kind of reverse engineering.
For each respondent, you have 20 answers and one label, which indicates whether this respondent gets the product trial or not.
You want to know which of the 20 questions are critical to the trial/no-trial decision. I'd suggest you first build a decision tree model on the training data and study the tree carefully to get some insights; for example, the splits near the top of the tree involve the most discriminative questions.
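To make that concrete, here is a minimal sketch in Python; the file name, the Q1..Q20 column names and the 0/1 selected column are assumptions about how the data might be laid out, not something from the question:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("survey.csv")                            # hypothetical file: Q1..Q20 plus a 0/1 "selected" column
X = pd.get_dummies(df[[f"Q{i}" for i in range(1, 21)]])   # one-hot encode the A/B/C/D answers
y = df["selected"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))   # the top splits are the most discriminative answers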
The answers can be made numeric for analysis purposes, example:
RespondentID  IsSelected  Q1AnsA  Q1AnsB  Q1AnsC  Q1AnsD  Q2AnsA ...
12345         1           0       0       1       0       0
Use association analysis to see if there are patterns in the answers.
Q3AnsC + Q8AnsB -> IsSelected
Use classification (such as logistic regression or a decision tree) to model how users are selected.
Use clustering. Are there distinct groups of respondents? In what ways are they different? Use the "elbow" or scree method to choose the number of clusters (see the sketch after this list).
Do you have other info about the respondents, such as demographics? A pivot table would be useful in that case.
Is there missing data? Are there patterns in the way that people skipped questions?
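To make the clustering suggestion concrete, here is a quick sketch that reuses the one-hot matrix X from the snippet in the previous answer; the range of k is arbitrary:
from sklearn.cluster import KMeans

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(2, 9)}
print(inertias)   # look for the "elbow" where the drop in inertia flattens out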