Re: [geomesa-users] query that checks "between" times does not work for GeoMesaSpark (Jim Hughes)

Hi Emilio,

I have copied the file and there are no more errors. But now I'm not sure that the filter works. I defined it as:

    val cqlFilter = CQL.toFilter("[[bbox(geom, 34, 46, 35, 45.8)] AND [SQLDATE BETWEEN '2013-02-01T00:00:00.000Z' AND '2013-05-02T00:00:00.000Z']]")

But I see lines from 2014 in the output, for example:


[2015-08-10 13:57:21,640]  INFO SergeSparkTest$: F: 291704271 | Sun Mar 30 20:00:00 EDT 2014 | 190 | UNITED STATES | null | UKRAINE | null | 30.5167 | 50.4333 | Kiev, Ukraine (general), Ukraine | POINT (30.5167 50.4333)

[2015-08-10 13:57:21,640]  INFO SergeSparkTest$: F: 291704275 | Sun Mar 30 20:00:00 EDT 2014 | 192 | UNITED STATES | null | UKRAINE | null | 30.5167 | 50.4333 | Kiev, Ukraine (general), Ukraine | POINT (30.5167 50.4333)

[2015-08-10 13:57:21,640]  INFO SergeSparkTest$: F: 291711809 | Sun Mar 30 20:00:00 EDT 2014 | 190 | VIETNAM | null | IRAQ | null | 30.5167 | 50.4333 | Kiev, Ukraine (general), Ukraine | POINT (30.5167 50.4333)
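
For what it's worth, here is a minimal sketch I could use to evaluate the same filter against single features in memory, outside of Accumulo and Spark (the schema is a simplified stand-in for my real feature type, and the expected results assume GeoTools converts the quoted date strings for the comparison):

    import org.geotools.data.DataUtilities
    import org.geotools.feature.simple.SimpleFeatureBuilder
    import org.geotools.filter.text.cql2.CQL
    import org.geotools.geometry.jts.WKTReader2

    object FilterCheck extends App {
      // simplified stand-in schema; the real feature type has more attributes
      val sft = DataUtilities.createType("test", "geom:Point:srid=4326,SQLDATE:Date")

      // the same predicate as above, written without the outer brackets
      val filter = CQL.toFilter(
        "bbox(geom, 34, 46, 35, 45.8) AND " +
          "SQLDATE BETWEEN '2013-02-01T00:00:00.000Z' AND '2013-05-02T00:00:00.000Z'")

      val wkt     = new WKTReader2()
      val dates   = new java.text.SimpleDateFormat("yyyy-MM-dd")
      val builder = new SimpleFeatureBuilder(sft)

      def feature(point: String, date: String) = {
        builder.set("geom", wkt.read(point))
        builder.set("SQLDATE", dates.parse(date))
        builder.buildFeature(null)
      }

      // inside the bbox and the date range: expect true
      println(filter.evaluate(feature("POINT (34.5 45.9)", "2013-03-01")))
      // a 2014 date like the rows above: expect false
      println(filter.evaluate(feature("POINT (34.5 45.9)", "2014-03-31")))
    }

If the second call also printed true, the filter itself would be suspect; if it prints false, the filter looks fine and the 2014 rows must be slipping past on the server side.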

Thanks,

Serge



On Mon, Aug 10, 2015 at 11:57 AM, Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Hi Serge,

We had to tweak the way the Accumulo iterators are configured. I would guess that you just need to deploy a new geomesa-distributed-runtime jar to your Accumulo instance. There should be more info in the Accumulo logs (which you can view at <accumulo master>:50095/log); it will likely be a NullPointerException in the Z3Iterator, which in this case means you need to update your jars.
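
If it helps to isolate things, a sketch like the following (the connection parameters and the 'gdelt' type name are placeholders for your own) runs the same kind of filter through a plain GeoTools feature reader, with no Spark involved, so a tablet server error will surface directly in the client:

    import java.io.Serializable

    import scala.collection.JavaConverters._

    import org.geotools.data.{DataStoreFinder, Query, Transaction}
    import org.geotools.filter.text.cql2.CQL

    object DirectQuery extends App {
      // placeholder connection parameters; substitute your own instance details
      val params = Map[String, Serializable](
        "instanceId" -> "myInstance",
        "zookeepers" -> "zoo1:2181",
        "user"       -> "myUser",
        "password"   -> "myPassword",
        "tableName"  -> "myCatalog"
      ).asJava

      val ds = DataStoreFinder.getDataStore(params)
      require(ds != null, "could not load the data store; check the parameters")

      val filter = CQL.toFilter(
        "SQLDATE BETWEEN '2013-02-01T00:00:00.000Z' AND '2013-05-02T00:00:00.000Z'")

      // 'gdelt' is an example type name; use your own simple feature type
      val reader = ds.getFeatureReader(new Query("gdelt", filter), Transaction.AUTO_COMMIT)
      try {
        while (reader.hasNext) println(reader.next())
      } finally {
        reader.close()
      }
    }

If the jars are stale, I'd expect this to fail the same way, with the underlying exception showing up in the tablet server log.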

Let me know if that doesn't work.

Thanks,

Emilio


On 08/10/2015 11:51 AM, Serge Vilvovsky wrote:
Hi All,

I have done the pull:

(geomesa)$ git status

On branch master

Your branch is up-to-date with 'origin/master'.

And tried the following CQL:

    val cqlFilter = CQL.toFilter("[[SQLDATE BETWEEN '2012-02-01T00:00:00.000Z' AND '2015-05-02T00:00:00.000Z']]")

But I still have a problem:

[2015-08-10 11:41:38,416]  INFO org.apache.spark.SparkContext: Running Spark version 1.4.1

[2015-08-10 11:41:38,582]  WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

[2015-08-10 11:41:38,672]  INFO org.apache.spark.SecurityManager: Changing view acls to: se23692

[2015-08-10 11:41:38,673]  INFO org.apache.spark.SecurityManager: Changing modify acls to: se23692

[2015-08-10 11:41:38,673]  INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(se23692); users with modify permissions: Set(se23692)

[2015-08-10 11:41:39,143]  INFO akka.event.slf4j.Slf4jLogger: Slf4jLogger started

[2015-08-10 11:41:39,176]  INFO Remoting: Starting remoting

[2015-08-10 11:41:39,308]  INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.25.87.161:55560]

[2015-08-10 11:41:39,312]  INFO org.apache.spark.util.Utils: Successfully started service 'sparkDriver' on port 55560.

[2015-08-10 11:41:39,325]  INFO org.apache.spark.SparkEnv: Registering MapOutputTracker

[2015-08-10 11:41:39,334]  INFO org.apache.spark.SparkEnv: Registering BlockManagerMaster

[2015-08-10 11:41:39,350]  INFO org.apache.spark.storage.DiskBlockManager: Created local directory at /private/var/folders/_q/sh2gzyyn0pz_synbyz_8kd6n_z609z/T/spark-822c4b90-8691-4e30-9148-9d8bdbb28a72/blockmgr-89f6e02b-bd16-4dca-bc29-21a012f90ccf

[2015-08-10 11:41:39,354]  INFO org.apache.spark.storage.MemoryStore: MemoryStore started with capacity 1966.1 MB

[2015-08-10 11:41:39,398]  INFO org.apache.spark.HttpFileServer: HTTP File server directory is /private/var/folders/_q/sh2gzyyn0pz_synbyz_8kd6n_z609z/T/spark-822c4b90-8691-4e30-9148-9d8bdbb28a72/httpd-b5a6ea72-3d85-47ff-91a6-142e6b31cdd9

[2015-08-10 11:41:39,400]  INFO org.apache.spark.HttpServer: Starting HTTP Server

[2015-08-10 11:41:39,440]  INFO org.spark-project.jetty.server.Server: jetty-8.y.z-SNAPSHOT

[2015-08-10 11:41:39,453]  INFO org.spark-project.jetty.server.AbstractConnector: Started SocketConnector@0.0.0.0:55562

[2015-08-10 11:41:39,453]  INFO org.apache.spark.util.Utils: Successfully started service 'HTTP file server' on port 55562.

[2015-08-10 11:41:39,464]  INFO org.apache.spark.SparkEnv: Registering OutputCommitCoordinator

[2015-08-10 11:41:39,552]  INFO org.spark-project.jetty.server.Server: jetty-8.y.z-SNAPSHOT

[2015-08-10 11:41:39,561]  INFO org.spark-project.jetty.server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040

[2015-08-10 11:41:39,561]  INFO org.apache.spark.util.Utils: Successfully started service 'SparkUI' on port 4040.

[2015-08-10 11:41:39,562]  INFO org.apache.spark.ui.SparkUI: Started SparkUI at http://172.25.87.161:4040

[2015-08-10 11:41:39,615]  INFO org.apache.spark.executor.Executor: Starting executor ID driver on host localhost

[2015-08-10 11:41:39,723]  INFO org.apache.spark.util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55564.

[2015-08-10 11:41:39,723]  INFO org.apache.spark.network.netty.NettyBlockTransferService: Server created on 55564

[2015-08-10 11:41:39,724]  INFO org.apache.spark.storage.BlockManagerMaster: Trying to register BlockManager

[2015-08-10 11:41:39,727]  INFO org.apache.spark.storage.BlockManagerMasterEndpoint: Registering block manager localhost:55564 with 1966.1 MB RAM, BlockManagerId(driver, localhost, 55564)

[2015-08-10 11:41:39,729]  INFO org.apache.spark.storage.BlockManagerMaster: Registered BlockManager

[2015-08-10 11:41:40,735]  INFO org.apache.spark.storage.MemoryStore: ensureFreeSpace(817680) called with curMem=0, maxMem=2061647216

[2015-08-10 11:41:40,736]  INFO org.apache.spark.storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 798.5 KB, free 1965.4 MB)

[2015-08-10 11:41:40,874]  INFO org.apache.spark.storage.MemoryStore: ensureFreeSpace(69491) called with curMem=817680, maxMem=2061647216

[2015-08-10 11:41:40,874]  INFO org.apache.spark.storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 67.9 KB, free 1965.3 MB)

[2015-08-10 11:41:40,877]  INFO org.apache.spark.storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55564 (size: 67.9 KB, free: 1966.1 MB)

[2015-08-10 11:41:40,879]  INFO org.apache.spark.SparkContext: Created broadcast 0 from newAPIHadoopRDD at GeoMesaSpark.scala:113

[2015-08-10 11:41:41,107]  INFO org.apache.spark.SparkContext: Starting job: count at SergeSparkTest.scala:56

[2015-08-10 11:41:41,119]  INFO org.apache.spark.scheduler.DAGScheduler: Got job 0 (count at SergeSparkTest.scala:56) with 3 output partitions (allowLocal=false)

[2015-08-10 11:41:41,119]  INFO org.apache.spark.scheduler.DAGScheduler: Final stage: ResultStage 0(count at SergeSparkTest.scala:56)

[2015-08-10 11:41:41,119]  INFO org.apache.spark.scheduler.DAGScheduler: Parents of final stage: List()

[2015-08-10 11:41:41,122]  INFO org.apache.spark.scheduler.DAGScheduler: Missing parents: List()

[2015-08-10 11:41:41,126]  INFO org.apache.spark.scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at GeoMesaSpark.scala:113), which has no missing parents

[2015-08-10 11:41:41,145]  INFO org.apache.spark.storage.MemoryStore: ensureFreeSpace(2464) called with curMem=887171, maxMem=2061647216

[2015-08-10 11:41:41,145]  INFO org.apache.spark.storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.4 KB, free 1965.3 MB)

[2015-08-10 11:41:41,149]  INFO org.apache.spark.storage.MemoryStore: ensureFreeSpace(1480) called with curMem=889635, maxMem=2061647216

[2015-08-10 11:41:41,150]  INFO org.apache.spark.storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1480.0 B, free 1965.3 MB)

[2015-08-10 11:41:41,150]  INFO org.apache.spark.storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55564 (size: 1480.0 B, free: 1966.1 MB)

[2015-08-10 11:41:41,151]  INFO org.apache.spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874

[2015-08-10 11:41:41,156]  INFO org.apache.spark.scheduler.DAGScheduler: Submitting 3 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at GeoMesaSpark.scala:113)

[2015-08-10 11:41:41,156]  INFO org.apache.spark.scheduler.TaskSchedulerImpl: Adding task set 0.0 with 3 tasks

[2015-08-10 11:41:41,317]  WARN org.apache.spark.scheduler.TaskSetManager: Stage 0 contains a task of very large size (12913 KB). The maximum recommended task size is 100 KB.

[2015-08-10 11:41:41,318]  INFO org.apache.spark.scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 13223471 bytes)

[2015-08-10 11:41:41,323]  INFO org.apache.spark.executor.Executor: Running task 0.0 in stage 0.0 (TID 0)

[2015-08-10 11:41:41,545]  INFO org.apache.spark.rdd.NewHadoopRDD: Input split: mapreduce.GroupedSplit[localhost](2644)

[2015-08-10 11:41:42,679] ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)

java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)

at org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat$1.nextKeyValue(AccumuloInputFormat.java:66)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValueInternal(GeoMesaInputFormat.scala:226)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValue(GeoMesaInputFormat.scala:216)

at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)

at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1626)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)

at org.apache.spark.scheduler.Task.run(Task.scala:70)

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:273)

at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)

... 17 more

Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan

at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)

at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:209)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:186)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:387)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:266)

... 19 more

[2015-08-10 11:41:42,772]  INFO org.apache.spark.scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 13224854 bytes)

[2015-08-10 11:41:42,773]  INFO org.apache.spark.executor.Executor: Running task 1.0 in stage 0.0 (TID 1)

[2015-08-10 11:41:42,774]  WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)

at org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat$1.nextKeyValue(AccumuloInputFormat.java:66)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValueInternal(GeoMesaInputFormat.scala:226)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValue(GeoMesaInputFormat.scala:216)

at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)

at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1626)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)

at org.apache.spark.scheduler.Task.run(Task.scala:70)

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:273)

at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)

... 17 more

Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan

at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)

at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:209)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:186)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:387)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:266)

... 19 more


[2015-08-10 11:41:42,775] ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job

[2015-08-10 11:41:42,779]  INFO org.apache.spark.scheduler.TaskSchedulerImpl: Cancelling stage 0

[2015-08-10 11:41:42,781]  INFO org.apache.spark.executor.Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1)

[2015-08-10 11:41:42,781]  INFO org.apache.spark.scheduler.TaskSchedulerImpl: Stage 0 was cancelled

[2015-08-10 11:41:42,781]  INFO org.apache.spark.scheduler.DAGScheduler: ResultStage 0 (count at SergeSparkTest.scala:56) failed in 1.619 s

[2015-08-10 11:41:42,783]  INFO org.apache.spark.scheduler.DAGScheduler: Job 0 failed: count at SergeSparkTest.scala:56, took 1.676203 s

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)

at org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat$1.nextKeyValue(AccumuloInputFormat.java:66)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValueInternal(GeoMesaInputFormat.scala:226)

at org.locationtech.geomesa.jobs.mapreduce.GeoMesaRecordReader.nextKeyValue(GeoMesaInputFormat.scala:216)

at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)

at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1626)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1099)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)

at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)

at org.apache.spark.scheduler.Task.run(Task.scala:70)

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 127.0.0.1:9997

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:273)

at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)

at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)

... 17 more

Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan

at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)

at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:209)

at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:186)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:387)

at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:266)

... 19 more


Driver stacktrace:

at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)

at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)

at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)

at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)

at scala.Option.foreach(Option.scala:236)

at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)

at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)

at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)

at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

[2015-08-10 11:41:42,786]  INFO org.apache.spark.SparkContext: Invoking stop() from shutdown hook

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}

[2015-08-10 11:41:42,810]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}

[2015-08-10 11:41:42,811]  INFO org.spark-project.jetty.server.handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}

[2015-08-10 11:41:42,844] ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)

org.apache.spark.TaskKilledException

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:204)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

[2015-08-10 11:41:42,845]  WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost): org.apache.spark.TaskKilledException

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:204)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)


[2015-08-10 11:41:42,845]  INFO org.apache.spark.scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 

[2015-08-10 11:41:42,863]  INFO org.apache.spark.ui.SparkUI: Stopped Spark web UI at http://172.25.87.161:4040

[2015-08-10 11:41:42,864]  INFO org.apache.spark.scheduler.DAGScheduler: Stopping DAGScheduler

[2015-08-10 11:41:42,921]  INFO org.apache.spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!

[2015-08-10 11:41:42,925]  INFO org.apache.spark.util.Utils: path = /private/var/folders/_q/sh2gzyyn0pz_synbyz_8kd6n_z609z/T/spark-822c4b90-8691-4e30-9148-9d8bdbb28a72/blockmgr-89f6e02b-bd16-4dca-bc29-21a012f90ccf, already present as root for deletion.

[2015-08-10 11:41:42,925]  INFO org.apache.spark.storage.MemoryStore: MemoryStore cleared

[2015-08-10 11:41:42,925]  INFO org.apache.spark.storage.BlockManager: BlockManager stopped

[2015-08-10 11:41:42,929]  INFO org.apache.spark.storage.BlockManagerMaster: BlockManagerMaster stopped

[2015-08-10 11:41:42,931]  INFO org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!

[2015-08-10 11:41:42,931]  INFO org.apache.spark.SparkContext: Successfully stopped SparkContext

[2015-08-10 11:41:42,931]  INFO org.apache.spark.util.Utils: Shutdown hook called

[2015-08-10 11:41:42,932]  INFO org.apache.spark.util.Utils: Deleting directory /private/var/folders/_q/sh2gzyyn0pz_synbyz_8kd6n_z609z/T/spark-822c4b90-8691-4e30-9148-9d8bdbb28a72






