In Scala Spark, I can easily add a column to an existing DataFrame by writing:

```scala
val newDf = df.withColumn("date_min", anotherDf("date_min"))
```

Doing the same in PySpark results in an `AnalysisException`. Here is what I'm doing:

```python
minDf.show(5)
maxDf.show(5)
```

+-----------...