4521840 upgrade scala213 spark35 #562


Merged
merged 30 commits on Jun 2, 2025
Changes from 1 commit
30 commits
6c3d267
Initial commits for Scala 2.13.16 and Spark 3.5.5 Upgrade
cbharadwajp Mar 19, 2025
6ec3605
Fix the unit test compilation
cbharadwajp Mar 19, 2025
9269f18
Update the upload-artifact version for github actions
cbharadwajp Mar 19, 2025
5c29c14
Install java and sbt before build the project
cbharadwajp Mar 20, 2025
37c64b6
Add the sbt repo details
cbharadwajp Mar 20, 2025
49c249b
Update the upload path for built artifact
cbharadwajp Mar 20, 2025
0951014
Fix the tests and upgrade download-artifacts for integration tests
cbharadwajp Mar 20, 2025
0e0542f
Fix the code coverage upload issue for test cases for integration tests
cbharadwajp Mar 24, 2025
8882083
Fix pipeline for codecov and run-integration tests
cbharadwajp Apr 1, 2025
4d1c3e5
Fix functional and performance testing folder with the upgrade of sca…
cbharadwajp Apr 1, 2025
6befdf4
Install docker for integration tests
cbharadwajp Apr 1, 2025
2a46983
Install docker for integration tests
cbharadwajp Apr 1, 2025
c9091a4
Install docker for integration tests
cbharadwajp Apr 1, 2025
2fe6bf2
Upgrade the codecov version and print the env
cbharadwajp Apr 1, 2025
e139b33
Debug codecov token
cbharadwajp Apr 2, 2025
f832d87
Revert debug codecov token
cbharadwajp Apr 2, 2025
caf2e3b
Set codecov token in env
cbharadwajp Apr 2, 2025
b7a13db
Set codecov token in env
cbharadwajp Apr 2, 2025
9b5d07f
Revert set codecov token in env
cbharadwajp Apr 2, 2025
48d7b13
Downgrade codecov action
cbharadwajp Apr 2, 2025
107ab5a
Update the codecov token
cbharadwajp Apr 2, 2025
d112f63
Comment the integration tests as bitnami supported is not available
cbharadwajp Apr 2, 2025
fbf00f7
Upgrade vertica jdbc driver to 24.4
cbharadwajp Apr 8, 2025
59d922e
Fix the clean up of staging directory for function test suite
cbharadwajp Apr 16, 2025
420ec22
Add cats-core dependency for Vertica 24.4 driver
cbharadwajp Apr 17, 2025
351c0eb
Update the README.md
cbharadwajp Apr 21, 2025
dd09e51
Update the README.md
cbharadwajp Apr 21, 2025
d9c4e5f
Fix the functional test case.
cbharadwajp Apr 21, 2025
8dafd9e
Upload fat and slim jar to artifacts
cbharadwajp May 23, 2025
d5847fb
Incorporate review comments
cbharadwajp May 23, 2025
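The commits above track the move to Scala 2.13.16 and Spark 3.5.5. The connector's own build.sbt is not part of this commit's diff, so the snippet below is only a rough sketch of the top-level settings such an upgrade usually touches; apart from the 2.13.16 and 3.5.5 values taken from the commit messages and the diff, the names here are assumptions.

// Hedged sketch only; not the connector's actual build.sbt.
scalaVersion := "2.13.16"

val sparkVersion = "3.5.5"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql"  % sparkVersion,
  "org.scalatest"    %% "scalatest"  % "3.2.16" % Test
)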
Fix functional and performance testing folder with the upgrade of scala and spark
cbharadwajp committed Apr 1, 2025
commit 4d1c3e5152b7c245794133b812218d0d25904de7
23 changes: 14 additions & 9 deletions functional-tests/build.sbt
@@ -11,6 +11,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
import java.util.Properties
import java.io.File

// Retrieving the connector version number from a common file.
val versionProps = settingKey[Properties]("Connector version properties")
@@ -38,22 +39,26 @@ val hadoopVersion = Option(System.getProperty("hadoopVersion")) match {
resolvers += "Artima Maven Repository" at "https://repo.artima.com/releases"
resolvers += "jitpack" at "https://jitpack.io"

libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.2"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.16"
libraryDependencies += "com.typesafe" % "config" % "1.4.1"

libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.1.2"
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "2.3.0"
libraryDependencies += "com.vertica.jdbc" % "vertica-jdbc" % "11.0.2-0"
libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion
libraryDependencies += "org.scalactic" %% "scalactic" % "3.2.2"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.2" % "test"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.2"
libraryDependencies += "org.scalamock" %% "scalamock" % "4.4.0" % Test
libraryDependencies += "org.typelevel" %% "cats-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.5"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.5.5"
libraryDependencies += "org.scalactic" %% "scalactic" % "3.2.16"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.16" % "test"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.5"
libraryDependencies += "org.scalamock" %% "scalamock" % "5.2.0" % Test
libraryDependencies += "org.typelevel" %% "cats-core" % "2.10.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % hadoopVersion
libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % hadoopVersion
libraryDependencies += "com.github.scopt" %% "scopt" % "4.0.1"
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.6"
//libraryDependencies += file("C:\\Users\\chaitanp\\SourceCode\\spark\\spark-connector\\connector\\target\\scala-2.13\\spark-vertica-connector-assembly-3.3.6.jar")

Compile / unmanagedJars += file("../connector/target/scala-2.13/spark-vertica-connector-assembly-3.3.6.jar")


assembly / assemblyJarName := s"vertica-spark-functional-tests.jar"

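The hunk header above shows the hadoopVersion lookup that the updated Hadoop dependencies rely on. A short sketch of that system-property pattern follows; the fallback value is an assumption, since the real default sits outside the lines shown in this diff.

// Sketch of the -DhadoopVersion selection pattern referenced in the hunk header above.
// The fallback "3.3.2" is assumed for illustration; the actual default is not visible here.
val hadoopVersion = Option(System.getProperty("hadoopVersion")) match {
  case Some(version) => version
  case None          => "3.3.2"
}

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % hadoopVersion
libraryDependencies += "org.apache.hadoop" % "hadoop-aws"  % hadoopVersion

With this in place, something like sbt -DhadoopVersion=3.3.4 test would override the default at build time.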
EndToEndTests.scala
@@ -23,6 +23,7 @@ import org.apache.spark.sql._
import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import org.scalatest.{Assertion, BeforeAndAfterAll, BeforeAndAfterEach}
import org.scalatest.flatspec.AnyFlatSpec

@@ -2892,7 +2893,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("hiredate", DateType, nullable=false),
StructField("region", StringType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fullname1", 35, null, Date.valueOf("2009-09-09"), "south"),
Row("fullname2", null, null, Date.valueOf("2019-09-09"), "north")
))
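This is the first of several hunks that add an explicit Array[Row] element type; under Scala 2.13 the array-to-Seq conversion and the type parameter of parallelize otherwise tend not to resolve cleanly for these literals. Below is a minimal, self-contained sketch of the pattern; the local SparkSession is assumed purely for illustration, the real suite builds its own.

// Sketch of the Array[Row] pattern used throughout this file (assumed local setup).
import java.sql.Date

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

object ArrayRowSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("array-row-sketch").getOrCreate()

  val schema = StructType(Array(
    StructField("name", StringType, nullable = false),
    StructField("age", IntegerType, nullable = true),
    StructField("hiredate", DateType, nullable = false)
  ))

  // The explicit Array[Row] pins the element type to Row even when fields are null.
  val rows = spark.sparkContext.parallelize(Array[Row](
    Row("fullname1", 35, Date.valueOf("2009-09-09")),
    Row("fullname2", null, Date.valueOf("2019-09-09"))
  ))

  spark.createDataFrame(rows, schema).show()
  spark.stop()
}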
@@ -2937,7 +2938,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("age", IntegerType, nullable=true),
StructField("region", StringType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fullname1", 35, null, "south")
))

@@ -2979,7 +2980,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("age", IntegerType, nullable=true),
StructField("age", IntegerType, nullable=true)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row(1, 2, 3, 4, 5)
))

@@ -3018,7 +3019,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("hire_date", DateType, nullable=false),
StructField("region", StringType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fn","mn","ln", Date.valueOf("2015-03-18"), "west")
))

@@ -3062,7 +3063,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("region", StringType, nullable=false),
StructField("hiredate", DateType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fn", 1, "south", Date.valueOf("2015-03-18"))
))

@@ -3113,7 +3114,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("hire_date", DateType, nullable=false),
StructField("location", StringType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fn", 1, Date.valueOf("2015-03-18"), "south")
))

@@ -3161,7 +3162,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
StructField("age", IntegerType, nullable=true),
StructField("hiredate", DateType, nullable=false)
))
val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("fn", "north", 30, Date.valueOf("2018-05-22"))
))

@@ -3204,7 +3205,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
"target_table_sql" -> target_table_ddl
)

val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("name1", 30)
))
val schema = StructType(Array(
@@ -3243,7 +3244,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
"target_table_sql" -> target_table_ddl
)

val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("name1", 30)
))
val schema = StructType(Array(
@@ -3279,7 +3280,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
"copy_column_list" -> copy_column_list
)

val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("name1", 30)
))
val schema = StructType(Array(
@@ -3315,7 +3316,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String
val options = writeOpts + ("table" -> tableName,
"target_table_sql" -> target_table_ddl)

val rows = spark.sparkContext.parallelize(Array(
val rows = spark.sparkContext.parallelize(Array[Row](
Row("name1", 30, "west")
))
val schema = StructType(Array(
@@ -3568,7 +3569,7 @@ class EndToEndTests(readOpts: Map[String, String], writeOpts: Map[String, String

val readDf: DataFrame = spark.read.format("com.vertica.spark.datasource.VerticaSource").options(readOpts + ("table" -> tableName)).load()
val dfDecimal = readDf.head.getDecimal(0).floatValue()
val dataDecimal = data.head.getAs[scala.math.BigDecimal](0).floatValue()
val dataDecimal: Float = df.head.getAs[BigDecimal](0).toFloat
assert(dfDecimal == dataDecimal)
assert(readDf.head.getLong(1) == data.head.getInt(1))
}
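The last hunk above changes how the expected decimal is extracted before comparing it with the value read back through the connector. A short sketch of that comparison pattern follows, using a local stand-in DataFrame instead of the connector round trip.

// Sketch: Row.getDecimal returns java.math.BigDecimal, so both sides are reduced to Float
// before comparing. The local DataFrame stands in for the connector round trip in the real test.
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

object DecimalCompareSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("decimal-compare-sketch").getOrCreate()

  val schema = StructType(Array(
    StructField("amount", DecimalType(10, 4), nullable = false),
    StructField("count", LongType, nullable = false)
  ))
  val df = spark.createDataFrame(
    spark.sparkContext.parallelize(Array[Row](Row(new java.math.BigDecimal("123.4567"), 7L))),
    schema
  )

  val readBack: Float = df.head.getDecimal(0).floatValue()
  val expected: Float = BigDecimal("123.4567").toFloat
  assert(readBack == expected)
  spark.stop()
}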
3 changes: 2 additions & 1 deletion performance-tests/build.sbt
@@ -19,7 +19,7 @@ version := "1.0"
resolvers += "Artima Maven Repository" at "https://repo.artima.com/releases"
resolvers += "jitpack" at "https://jitpack.io"

libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.2"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.16"
libraryDependencies += "com.typesafe" % "config" % "1.4.1"

libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "2.3.0"
@@ -33,6 +33,7 @@ libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.5"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"
libraryDependencies += "org.scalamock" %% "scalamock" % "5.2.0" % Test
libraryDependencies += "org.typelevel" %% "cats-core" % "2.3.0"
Compile / unmanagedJars += file("../connector/target/scala-2.13/spark-vertica-connector-assembly-3.3.6.jar")

assembly / assemblyMergeStrategy := {
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
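Both test builds now locate the connector assembly through a hard-coded relative path. As a hedged sketch, not part of this PR, the jar name could instead be derived from a setting so it does not need editing on every connector release; connectorAssemblyVersion below is an assumed, illustrative name.

// Hedged alternative sketch (not in the PR): derive the unmanaged-jar path from a setting.
val connectorAssemblyVersion = settingKey[String]("Connector assembly version used to locate the jar")
connectorAssemblyVersion := "3.3.6"

Compile / unmanagedJars += file(
  s"../connector/target/scala-2.13/spark-vertica-connector-assembly-${connectorAssemblyVersion.value}.jar"
)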