
Working with partitioned Parquet files in PySpark

Posted at 2022-09-18

Reading data

path = "gs://bucket/table/year=*/month=*/day=*/location=*/*.parquet"
base_path = "gs://bucket/table/"

df = spark.read.option("basePath", base_path).parquet(path)

Peek at the data and check the row count.

df.head()
df.count()

Checking the schema

df.printSchema()

Adding a column

from pyspark.sql.functions import from_unixtime
df_new = df.withColumn('timestamp', from_unixtime('ts_unix'))

Writing data

path = "gs://bucket/table/"
df_new.write.option("compression", "snappy").partitionBy("partition_col1", "partition_col2").parquet(path)