We are currently using Python to create a Glue context, process data from S3, and write the processed Parquet files back to S3, but the job gets stuck during the df.write.parquet(s3_path) call. On the LocalStack backend I see AWS s3.HeadObject => 404 (404), and the call eventually times out with "com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out".

from awsglue.context import GlueContext
from pyspark.context import SparkContext

print("Creating spark context")
sparkContext = SparkContext.getOrCreate()

# Configure the S3A filesystem to talk to the LocalStack endpoint
hadoop_conf = sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("fs.s3a.path.style.access", "true")
hadoop_conf.set("fs.s3a.connection.ssl.enabled", "false")
hadoop_conf.set("com.amazonaws.services.s3a.enableV4", "true")
# Dummy session credentials for LocalStack
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", "test")
hadoop_conf.set("fs.s3a.secret.key", "test")
hadoop_conf.set("fs.s3a.session.token", "test")
hadoop_conf.set("fs.s3a.endpoint", s3_endpoint)

glueContext = GlueContext(sparkContext)
df = glueContext.create_dynamic_frame.from_options(…)
We then do some processing work and end up with processed_df, on which we invoke:
processed_df.write.parquet("<s3_path>/output.parquet")
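
As a sanity check, the endpoint can be probed directly with boto3 from the same environment the job runs in; if a minimal snippet like the one below also hangs or times out, the problem is connectivity rather than Spark (the endpoint URL and bucket name are placeholders for our actual values):

import boto3

# Placeholder values; substitute whatever the job actually uses. Note that
# LocalStack's default edge endpoint is http://localhost:4566, but when the
# job itself runs inside a container, "localhost" resolves to that container
# rather than to the host running LocalStack.
S3_ENDPOINT = "http://localhost:4566"
BUCKET = "my-bucket"

s3 = boto3.client(
    "s3",
    endpoint_url=S3_ENDPOINT,
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Round trip exercising the same operations S3A issues (PutObject/HeadObject)
s3.put_object(Bucket=BUCKET, Key="connectivity-check", Body=b"ok")
print(s3.head_object(Bucket=BUCKET, Key="connectivity-check")["ContentLength"])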

Below is the stack trace from the parquet write:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out
at org.apache.hadoop.fs.s3a.S3AUtils.translateInterruptedException(S3AUtils.java:342)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:177)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2833)
at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2808)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2129)
at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2062)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2330)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:358)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:152)
at org.apache.spark.sql.execution.datasources.SQLEmrOptimizedCommitProtocol.setupJob(SQLEmrOptimizedCommitProtocol.scala:101)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:173)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:195)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:185)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:223)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:220)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:181)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)

Any suggestions on the cause or a solution?

Hi @b349zhan,

Could you please take a look at our documentation and our samples? They should help with configuring the script; a rough configuration sketch follows the links below.

Glue | Docs (localstack.cloud)
localstack-pro-samples/glue-etl-jobs (github.com)
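
For reference, a minimal S3A setup for LocalStack tends to look like the sketch below. This is an illustration, not a verbatim excerpt from the samples: the host.docker.internal endpoint assumes the job runs in a container on the same Docker host as LocalStack, and SimpleAWSCredentialsProvider is used because the static "test" credentials are not session-based:

# Illustrative sketch only. Assumes LocalStack's default edge port 4566 and a
# job running in a container on the same Docker host (hence
# host.docker.internal rather than localhost).
from awsglue.context import GlueContext
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
conf = sc._jsc.hadoopConfiguration()
conf.set("fs.s3a.endpoint", "http://host.docker.internal:4566")
conf.set("fs.s3a.path.style.access", "true")
conf.set("fs.s3a.connection.ssl.enabled", "false")
# Static test credentials, so the simple (non-session) provider applies
conf.set("fs.s3a.aws.credentials.provider",
         "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
conf.set("fs.s3a.access.key", "test")
conf.set("fs.s3a.secret.key", "test")
# Fail fast instead of hanging if the endpoint is unreachable
conf.set("fs.s3a.connection.timeout", "5000")
conf.set("fs.s3a.attempts.maximum", "1")

glueContext = GlueContext(sc)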
