I have a DataFrame with two columns: "Code", which can repeat, and "Values". For example, "Code" is 1,1,1,5,5 and "Values" is 15,18,24,38,41. I want to first sort by both columns ( df.sort("Code", "Values") ), then do a groupBy("Code") and aggregate "Values", but I want to apply a UDF to the values, so I need to pass the "Values" of each code as a list to the UDF. I am not sure how many "Values" each "Code" will have; in this example "Code" 1 has 3 values and "Code" 5 has 2. So for each "Code" I need to pass all of its "Values" as a list to the UDF.
You can do a groupBy and then use the collect_set or collect_list function in pyspark. Below is an example dataframe for your use case (I hope this is what you are referring to):
from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.sql import functions as F
sc = SparkContext("local")
sqlContext = HiveContext(sc)
df = sqlContext.createDataFrame([
    ("code1", "val1"),
    ("code1", "val2"),
    ("code1", "val3"),
    ("code2", "val1"),
    ("code2", "val2"),
], ["code", "val"])
df.show()
+-----+-----+
| code|  val|
+-----+-----+
|code1| val1|
|code1| val2|
|code1| val3|
|code2| val1|
|code2| val2|
+-----+-----+
Now the groupBy and collect_list command:
(df
.groupby("code")
.agg(F.collect_list("val"))
.show())
Output:
+-----+------------------+
| code| collect_list(val)|
+-----+------------------+
|code1|[val1, val2, val3]|
|code2|      [val1, val2]|
+-----+------------------+
Above you get the list of aggregated values in the second column.
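Since the original question asks about applying a UDF to those values, here is a minimal sketch of that last step (my own addition; summarize is just a placeholder for whatever aggregation you actually need):
from pyspark.sql.types import StringType
# the UDF receives all collected values of one code as a Python list
def summarize(values):
    return ",".join(sorted(values))
my_udf = F.udf(summarize, StringType())
(df
 .groupby("code")
 .agg(F.collect_list("val").alias("vals"))
 .withColumn("result", my_udf("vals"))
 .show())
Because collect_list does not guarantee ordering, any order-sensitive logic is easiest to do inside the UDF itself.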
I have two columns in a pandas DataFrame, both also containing a lot of null values. Some values in column B exist partially in a field (or multiple fields) of column A. I want to check whether a value of B exists in A and, if so, separate that value out and add it as a new row in column A.
Example:
Column A | Column B
black bear | null
black box | null
red fox | null
red fire | null
green tree | null
null | red
null | yellow
null | black
null | red
null | green
And I want the following:
Column A
black
bear
box
red
fire
fox
yellow
green
Does anyone have any tips on how to get this result? I have tried using regex (re.match), but I am struggling with the fact that I do not have a fixed pattern but a variable (namely, any value in column B). This is my effort:
import re
list_A = df['Column A'].values.tolist()
list_B = df['Column B'].values.tolist()
for i in list_A:
    for j in list_B:
        if i is not None:
            if re.match(f'{j}.+', i):
                ...
Note: the columns are over 2500 rows long.
If I understand your question correctly, you want to split the value of b out of the value in a whenever b is found in a, and then store the separated values. If so, how about trying the following?
import re
list_A = df['Column A'].values.tolist()
list_B = df['Column B'].values.tolist()
list_of_separated_values = []
for a in list_A:
    for b in list_B:
        # skip the null entries and only split when b actually occurs in a
        if a is not None and b is not None and b in a:
            list_of_separated_values.extend(
                [val.strip() for val in re.split('({})'.format(b), a) if val.strip()])
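For reference, here is a self-contained variant of the same idea on the example columns above (my own sketch, not part of the original answer): it builds one regex from the non-null Column B values, splits every non-null Column A value on it, and combines the pieces with the standalone Column B values.
import re
import pandas as pd
words = df['Column B'].dropna().unique().tolist()
# one alternation pattern; the capture group makes re.split keep the matched word
pattern = re.compile('({})'.format('|'.join(map(re.escape, words))))
pieces = []
for a in df['Column A'].dropna():
    pieces.extend(p.strip() for p in pattern.split(a) if p.strip())
result = pd.Series(pieces + words).drop_duplicates().reset_index(drop=True)
On the example data this yields black, bear, box, red, fox, fire, green, tree and yellow (deduplicated, in order of first appearance).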
This is not a regex question. You have your data in a dataframe, use the dataframe functionality to fix it.
Assuming data_frame is your pandas DataFrame.
# filter the DataFrame to just those with `null` in Column A
filtered = data_frame[data_frame["Column A"].isnull()]
# in the filtered table, assign Column B to Column A
filtered["Column A"] = filtered["Column B"]
# set Column B to null/None (I'm assuming you want this or this step can be skipped)
filtered["Column B"] = None
print(data_frame)
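Note (my own addition): filtered above is a slice of data_frame, so assigning to it usually triggers a SettingWithCopyWarning and does not change data_frame itself. An equivalent in-place form using .loc would be:
# assign Column B into Column A only on the rows where Column A is null,
# working directly on the original DataFrame
mask = data_frame["Column A"].isnull()
data_frame.loc[mask, "Column A"] = data_frame.loc[mask, "Column B"]
data_frame.loc[mask, "Column B"] = None
print(data_frame)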
I have a table in the DB like this:
MyTableWithValues
id | user (fk to Users) | value (fk to Values) | text         | something1 | something2 ...
1  | userobject1        | valueobject1         | asdasdasdasd | 123        | 12321
2  | userobject2        | valueobject50        | QWQWQWQWQWQW | 515        | 5555455
3  | userobject1        | valueobject1         | asdasdasdasd | 12345      | 123213
I need to delete all objects where the fields user, value and text are repeated, but keep one of them. In this example the 3rd record would be deleted.
How can I do this, using Django ORM?
PS: I tried this:
from django.db.models import Count, Max
recs = (
    MyTableWithValues.objects
    .order_by()
    .annotate(max_id=Max('id'), count_id=Count('user__id'))
    #.filter(count_id__gt=1)
    .annotate(count_values=Count('values'))
    #.filter(count_icd__gt=1)
)
...
...
for r in recs:
    print(r.id, r.count_id, r.count_values)
it prints something like this:
1 1 1
2 1 1
3 1 1
...
Despite the fact that there are duplicated values in the database, I can't understand why the Count function does not work.
Can anybody help me?
You should first be aware of how Count works.
The Count method counts identical rows.
It uses all the fields available in an object to check whether it is identical to other rows.
So in the current situation count_values results in 1, because Count is using all fields except id to look for similar rows.
Count includes the user, value, text, something1 and something2 fields when checking for similarity.
To count rows with similar fields you have to use only the user, values and text fields.
Query:
from django.db.models import Count, Max
recs = (MyTableWithValues.objects
        .values('user', 'values', 'text')
        .annotate(max_id=Max('id'), count_id=Count('user__id'))
        .annotate(count_values=Count('values')))
It will return a list of dictionaries:
print(recs)
Output:
<QuerySet [{'user': 1, 'values': 1, 'text': 'asdasdasdasd', 'max_id': 3, 'count_id': 2, 'count_values': 2}, {'user': 2, 'values': 2, 'text': 'QWQWQWQWQWQW', 'max_id': 2, 'count_id': 1, 'count_values': 1}]>
Using this queryset you can check how many times rows share the same user, values and text values.
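From here, if you also want to actually delete the duplicates while keeping one row per group (my own sketch, not part of the original answer, and assuming the field names user, value and text from the question's model), you could keep the highest id per group and delete the rest:
from django.db.models import Max
# ids to keep: one row (the newest) per (user, value, text) group
keep_ids = (MyTableWithValues.objects
            .values('user', 'value', 'text')
            .annotate(max_id=Max('id'))
            .values_list('max_id', flat=True))
# delete everything else; list() materializes the ids, which some backends require
MyTableWithValues.objects.exclude(id__in=list(keep_ids)).delete()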
Would a Python loop work for you?
import collections
d = collections.defaultdict(list)
# group all objects by the key
for e in MyTableWithValues.objects.all():
    k = (e.user_id, e.value_id, e.text)
    d[k].append(e)
for k, obj_list in d.items():
    if len(obj_list) > 1:
        # except the first one, delete all objects
        for e in obj_list[1:]:
            e.delete()
I need help finding the unique partition column names for a Hive table using PySpark. The table might have multiple partition columns, and preferably the output should return them as a list.
It would be great if the result also included the data types of the partition columns.
Any suggestions will be helpful.
It can be done using desc as shown below:
df=spark.sql("""desc test_dev_db.partition_date_table""")
>>> df.show(truncate=False)
+-----------------------+---------+-------+
|col_name |data_type|comment|
+-----------------------+---------+-------+
|emp_id |int |null |
|emp_name |string |null |
|emp_salary |int |null |
|emp_date |date |null |
|year |string |null |
|month |string |null |
|day |string |null |
|# Partition Information| | |
|# col_name |data_type|comment|
|year |string |null |
|month |string |null |
|day |string |null |
+-----------------------+---------+-------+
Since this table is partitioned, you can see the partition column information here along with the data types.
It seems you are interested in just the partition column names and their respective data types, so I am creating a list of tuples.
partition_list=df.select(df.col_name,df.data_type).rdd.map(lambda x:(x[0],x[1])).collect()
>>> print partition_list
[(u'emp_id', u'int'), (u'emp_name', u'string'), (u'emp_salary', u'int'), (u'emp_date', u'date'), (u'year', u'string'), (u'month', u'string'), (u'day', u'string'), (u'# Partition Information', u''), (u'# col_name', u'data_type'), (u'year', u'string'), (u'month', u'string'), (u'day', u'string')]
partition_details = [partition_list[index+1:] for index,item in enumerate(partition_list) if item[0]=='# col_name']
>>> print partition_details
[[(u'year', u'string'), (u'month', u'string'), (u'day', u'string')]]
It will return an empty list if the table is not partitioned. Hope this helps.
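If you only need the column names themselves, you can flatten that result (my own addition):
# partition_details holds one inner list per '# col_name' marker;
# guard against unpartitioned tables, where it is empty
partition_names = [name for name, dtype in partition_details[0]] if partition_details else []
# ['year', 'month', 'day']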
The following snippet:
1. Gets the columns for the given table
2. Filters out the partition columns
3. Extracts (name, dataType) tuples from the partition columns
# s: pyspark.sql.session.SparkSession
# table: str
# 1. Get table columns for given table
columns = s.catalog.listColumns(table)
# 2. Filter out partition columns
partition_columns = list(filter(lambda c: c.isPartition, columns))
# 3. Now you can extract the name and dataType (among other attributes)
[ (c.name, c.dataType) for c in partition_columns ]
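Wrapped up as a small helper for reuse (my own sketch; passing a database-qualified table name assumes a Spark version whose listColumns accepts it, otherwise pass the dbName argument separately):
def partition_columns(spark, table):
    # return [(name, dataType), ...] for the partition columns of the given table
    return [(c.name, c.dataType) for c in spark.catalog.listColumns(table) if c.isPartition]
# e.g. partition_columns(spark, "test_dev_db.partition_date_table")
# -> [('year', 'string'), ('month', 'string'), ('day', 'string')]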
Another simple method, using a PySpark script:
from pyspark.sql.types import *
import pyspark.sql.functions as f
from pyspark.sql import functions as F
from pyspark.sql.functions import col, concat, lit
descschema = StructType([ StructField("col_name", StringType())
,StructField("data_type", StringType())
,StructField("comment", StringType())])
df = spark.sql(f"describe formatted serve.cust_transactions" )
df2=df.where((f.col("col_name")== 'Part 0') | (f.col("col_name")== 'Part 2') | (f.col("col_name")== 'Name')).select(f.col('data_type'))
df3 =df2.toPandas().transpose()
display(df3)
The result is a single-row transposed Pandas DataFrame containing those values.
I have created a DataFrame in PySpark from a CSV file; the data has columns in the following format:
+---+--------------+-------------+
| ID| FileID| TestID|
+---+--------------+-------------+
| 1| HD_Fly_456_34|Gone_YT_78_67|
| 2|FG_Home_567_54|Gone_YT_78_22|
| 3| GD_Go_678_87|Gone_YT_06_82|
| 4| GH_Buy_908_45|Gone_YT_92_70|
| 5| HJ_Get_789_65|Gone_YT_98_43|
+---+--------------+-------------+
I used the following lines of code to create a dataframe:
df=sqlc.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load("testfile.csv")
I need to split the elements of the columns FileID, TestID and so on at the _ (underscore), so that the parts can be stored in new columns.
I am using the following method:
df.join(df['FileID'].str.split('_', 1, expand=True).rename(columns={0:'R', 1:'R1',2:'R2',3:'R3'}))
I get the following error:
df.join(df['FileID'].str.split('_', 1, expand=True).rename(columns={0:'R', 1:'R1',2:'R2',3:'R3'}))
TypeError: 'Column' object is not callable
How do I get to where I need to be?
While sometimes similar, PySpark is not the same as Pandas.
I'd use split:
from pyspark.sql.functions import split
parts = split("TestID", "_")
df.select(
    parts[0].alias("R"), parts[1].alias("R1"),
    parts[2].alias("R2"), parts[3].alias("R3"))
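A small variation (my own sketch, assuming the same df): keep the original columns and attach the split parts of FileID as new columns instead of selecting only the parts.
from pyspark.sql.functions import split
file_parts = split(df["FileID"], "_")
df = (df
      .withColumn("R", file_parts[0])
      .withColumn("R1", file_parts[1])
      .withColumn("R2", file_parts[2])
      .withColumn("R3", file_parts[3]))
df.show()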
How can I use collect_set or collect_list on a DataFrame after a groupBy? For example: df.groupby('key').collect_set('values'). I get an error: AttributeError: 'GroupedData' object has no attribute 'collect_set'
You need to use agg. Example:
from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.sql import functions as F
sc = SparkContext("local")
sqlContext = HiveContext(sc)
df = sqlContext.createDataFrame([
    ("a", None, None),
    ("a", "code1", None),
    ("a", "code2", "name2"),
], ["id", "code", "name"])
df.show()
+---+-----+-----+
| id| code| name|
+---+-----+-----+
| a| null| null|
| a|code1| null|
| a|code2|name2|
+---+-----+-----+
Note in the above you have to create a HiveContext. See https://stackoverflow.com/a/35529093/690430 for dealing with different Spark versions.
(df
.groupby("id")
.agg(F.collect_set("code"),
F.collect_list("name"))
.show())
+---+-----------------+------------------+
| id|collect_set(code)|collect_list(name)|
+---+-----------------+------------------+
| a| [code1, code2]| [name2]|
+---+-----------------+------------------+
If your DataFrame is large, you can try using a pandas UDF (GROUPED_AGG) to avoid memory errors. It can also be much faster.
Grouped aggregate Pandas UDFs are similar to Spark aggregate functions. They are used with groupBy().agg() and pyspark.sql.Window. A grouped aggregate Pandas UDF defines an aggregation from one or more pandas.Series to a scalar value, where each pandas.Series represents a column within the group or window (see the Spark pandas UDF documentation).
Example:
import pyspark.sql.functions as F
@F.pandas_udf('string', F.PandasUDFType.GROUPED_AGG)
def collect_list(name):
    # name is a pandas.Series holding the column values of one group
    return ', '.join(name)
grouped_df = df.groupby('id').agg(collect_list(df["name"]).alias('names'))
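One caveat (my own addition): with a DataFrame like the one in the previous answer, the name column contains nulls, and ', '.join fails on None values, so you may want to drop them inside the UDF, for example:
@F.pandas_udf('string', F.PandasUDFType.GROUPED_AGG)
def collect_list_nonnull(name):
    # drop the nulls of the group before joining
    return ', '.join(name.dropna())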