I want to understand more precisely how the cache method works for DataFrames in PySpark.
When I run df.cache(), it returns a DataFrame. So if I do df2 = df.cache(), which DataFrame is cached: df, df2, or both?
I found the source code of DataFrame.cache:
```python
def cache(self):
    """Persists the :class:`DataFrame` with the default storage level
    (`MEMORY_AND_DISK`).

    .. note:: The default storage level has changed to `MEMORY_AND_DISK`
        to match Scala in 2.0.
    """
    self.is_cached = True
    self._jdf.cache()
    return self
```
Since cache() returns self, df2 is simply another name for the same DataFrame object as df. So the answer is: both, because they are one and the same cached DataFrame.
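You can check this yourself. Below is a minimal sketch (assuming a local SparkSession and a small example DataFrame built with spark.range, which are not part of the original question) that verifies df and df2 are the same object and that the cached storage level is MEMORY_AND_DISK:

```python
from pyspark.sql import SparkSession

# Hypothetical local session and example DataFrame for illustration only.
spark = SparkSession.builder.master("local[*]").appName("cache-demo").getOrCreate()

df = spark.range(10)   # example DataFrame with a single "id" column
df2 = df.cache()       # cache() returns self, so df2 is the same object as df

print(df2 is df)       # True: both names point to one DataFrame
print(df.is_cached)    # True: the flag set inside cache()
print(df.storageLevel) # should report MEMORY_AND_DISK (the default since Spark 2.0)
```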