
Encoder error while trying to map dataframe row to updated row


How to handle the encoder error while trying to map a dataframe row to an updated row

When I'm trying to do the same thing in my code, as mentioned below:

dataframe.map(row => {
  val row1 = row.getAs[String](1)
  val make = if (row1.toLowerCase == "tesla") "S" else row1
  Row(row(0), make, row(2))
})

I have taken the above reference from here: Scala: How can I replace value in Dataframes using scala. But I am getting an encoder error:

Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.

Note: I am using Spark 2.0!

Answer

There is nothing unexpected here. You're trying to use code which was written for Spark 1.x and is no longer supported in Spark 2.0:

  • in 1.x, DataFrame.map is ((Row) ⇒ T)(ClassTag[T]) ⇒ RDD[T]
  • in 2.x, Dataset[Row].map is ((Row) ⇒ T)(Encoder[T]) ⇒ Dataset[T] (see the sketch below)
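
To make the difference concrete, here is a minimal sketch; the SparkSession name spark and the sample data are assumptions for illustration only. Once spark.implicits._ is in scope, mapping to a tuple compiles because an implicit Encoder exists, while mapping to Row fails with exactly the error quoted above:

import org.apache.spark.sql.SparkSession

// Hypothetical setup for this sketch
val spark = SparkSession.builder().master("local[*]").appName("encoder-demo").getOrCreate()
import spark.implicits._  // provides Encoders for primitives, tuples and case classes

val df = Seq((2012, "Tesla", "S")).toDF("year", "make", "model")

// Compiles in 2.x: (Int, String, String) has an implicit Encoder
val ok = df.map(r => (r.getInt(0), r.getString(1), r.getString(2)))

// Does not compile in 2.x without an explicit Encoder[Row]:
// val bad = df.map(identity)  // "Unable to find encoder for type stored in a Dataset"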

To be honest, it didn't make much sense in 1.x either. Independent of the version, you can simply use the DataFrame API:

import org.apache.spark.sql.functions.{when, lower}

val df = Seq(
  (2012, "Tesla", "S"),
  (1997, "Ford", "E350"),
  (2015, "Chevy", "Volt")
).toDF("year", "make", "model")

df.withColumn("make", when(lower($"make") === "tesla", "S").otherwise($"make"))
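
For reference, adding .show() to the last line should print something along these lines (hand-sketched expected output, not captured from an actual run):

df.withColumn("make", when(lower($"make") === "tesla", "S").otherwise($"make")).show()

+----+-----+-----+
|year| make|model|
+----+-----+-----+
|2012|    S|    S|
|1997| Ford| E350|
|2015|Chevy| Volt|
+----+-----+-----+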

If you really want to use map, you should use a statically typed Dataset:

import spark.implicits._

case class Record(year: Int, make: String, model: String)

df.as[Record].map {
  case tesla if tesla.make.toLowerCase == "tesla" => tesla.copy(make = "S")
  case rec => rec
}
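
A brief usage note, with the value name updated being a hypothetical choice: the result of the map is a Dataset[Record], so if untyped DataFrame operations are needed afterwards, .toDF() converts it back and keeps the case class field names as columns:

val updated = df.as[Record]
  .map(r => if (r.make.toLowerCase == "tesla") r.copy(make = "S") else r)

updated.toDF().show()  // back to an untyped DataFrame; columns stay year, make, model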

Or at least return an object which will have an implicit encoder:

df.map {
  case Row(year: Int, make: String, model: String) =>
    (year, if (make.toLowerCase == "tesla") "S" else make, model)
}
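
One caveat worth adding: the tuple result gets the default column names _1, _2 and _3, so a .toDF(...) rename restores the originals (a sketch):

df.map {
  case Row(year: Int, make: String, model: String) =>
    (year, if (make.toLowerCase == "tesla") "S" else make, model)
}.toDF("year", "make", "model")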

Finally, if for some completely crazy reason you really want to map over Dataset[Row], you have to provide the required encoder:

import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

// Yup, it would be possible to reuse df.schema here
val schema = StructType(Seq(
  StructField("year", IntegerType),
  StructField("make", StringType),
  StructField("model", StringType)
))

val encoder = RowEncoder(schema)

df.map {
  case Row(year, make: String, model) if make.toLowerCase == "tesla" =>
    Row(year, "S", model)
  case row => row
} (encoder)
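
As the comment in the snippet already hints, the schema does not need to be rebuilt by hand; a minimal variant reusing df.schema:

import org.apache.spark.sql.catalyst.encoders.RowEncoder

val encoder = RowEncoder(df.schema)  // derives the same Encoder[Row] from the existing DataFrame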

This concludes the article on the encoder error while trying to map a dataframe row to an updated row. We hope the answer above helps, and thank you for supporting 编程技术网 (www.editcode.net)!
