Using UDFs in PySpark

PySpark's built-in functions cannot always cover every requirement, and sometimes we need to write a UDF to implement the actual business logic. PySpark makes this straightforward: import from pyspark.sql import functions as F in your script and register the function with F.udf. Here I was working on a longitude/latitude problem, and below is the UDF example I wrote for it. (Parts of the code have been omitted.)
 
#!/usr/bin/python3.6
# -*- coding: utf-8 -*-

from pyspark.sql import functions as F
from pyspark.sql import types as T

from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf()
conf.set("spark.app.name", "lbs_coordinate")

spark = SparkSession.builder.config(conf=conf).enableHiveSupport().getOrCreate()


def lbs_coordinate(data_date):
    data_all_sql = """
        SELECT
            imei
            ,coordinate
        FROM tmp.table_coordinate
        WHERE data_date = '{data_date}'  -- partition filter; column name assumed
    """.format(data_date=data_date)
    print(data_all_sql)
    df = spark.sql(data_all_sql)
    df_agg_coor = (
        df.select('imei', 'coordinate')
          .groupBy('imei')
          .agg(avg_speed_udf(F.collect_set('coordinate')).alias('value'))
    )
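    # Expose the aggregated result as a temporary view for downstream SQL steps (omitted here)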
    df_agg_coor.createOrReplaceTempView('agg_coor_d_avg_speed_temp')


def compute_avg_speed(coordinate_set):
    # Actual processing logic goes here: coordinate_set is the collected
    # set of coordinates for one imei; iterate over it and compute whatever
    # you need, then return a value matching the declared return type.
    return str(coordinate_set)  # placeholder; the real computation is omitted


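# Wrap the plain Python function as a Spark UDF, declaring StringType as its return type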
avg_speed_udf = F.udf(compute_avg_speed, T.StringType())

if __name__ == "__main__":
    lbs_coordinate('20200101')  # example partition date; replace as needed
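
For reference, here is a minimal, self-contained sketch of the same pattern that runs without the Hive table: it builds a toy DataFrame in memory, registers a trivial UDF, and applies it to the result of F.collect_set. The table contents and the counting logic are illustrative assumptions only.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import types as T

spark = SparkSession.builder.appName("udf_demo").getOrCreate()

# Toy rows standing in for tmp.table_coordinate (imei, coordinate)
df = spark.createDataFrame(
    [("imei_1", "116.40,39.90"),
     ("imei_1", "116.41,39.91"),
     ("imei_2", "121.47,31.23")],
    ["imei", "coordinate"],
)

# A trivial UDF: report how many distinct coordinates each imei has
count_coords_udf = F.udf(lambda coords: str(len(coords)), T.StringType())

result = df.groupBy("imei").agg(
    count_coords_udf(F.collect_set("coordinate")).alias("value")
)
result.show()  # one row per imei; "value" holds the distinct-coordinate count

F.collect_set gathers each imei's distinct coordinates into a single array column, so the UDF receives the whole group as one Python list; that is exactly why the script above collects before applying avg_speed_udf.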

 

