I've got a simple query working in a Java program, but ran into issues with the Spark integration.
scala> val queryRdd = GeoMesaSpark.rdd(conf, sc, params, q)
Scanning ST index table for feature type SlowStart
Filter: [[[ Where bbox POLYGON ((-79.5 36.5, -79.5 36.6, -79.3 36.6, -79.3 36.5, -79.5 36.5)) ] AND org.geotools.filter.temporal.DuringImpl@26381560] AND [ Activity = 2 ]]
Geometry filters: ArrayBuffer()
Temporal filters: ArrayBuffer()
Other filters: ArrayBuffer([[ Where bbox POLYGON ((-79.5 36.5, -79.5 36.6, -79.3 36.6, -79.3 36.5, -79.5 36.5)) ] AND org.geotools.filter.temporal.DuringImpl@26381560], [ Activity = 2 ])
Tweaked geom filters are ArrayBuffer()
GeomsToCover: ArrayBuffer()
15/05/31 03:58:30 WARN index.STIdxStrategy: Querying Accumulo without SpatioTemporal filter.
STII Filter: No STII Filter
Interval: No interval
Filter: AcceptEverythingFilter
Planning query
Random Partition Planner (5): 0,1,2,3,4
IndexOrDataPlanner: 1
ConstPlanner: SlowStart
GeoHashKeyPlanner: KeyAccept (3)
DatePlanner: start: 0000010100 end: 9999123123
The resulting query took a long time to finish; judging from the log (empty geometry and temporal filter lists, and the "Querying Accumulo without SpatioTemporal filter" warning), I think it scanned the entire data set. The same filter from CQL.toFilter() worked fine in my Java program, returning results quickly.
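For reference, this is roughly how the CQL for the query is built. The attribute names "geom" and "dtg" and the exact date range are placeholders here, not the actual values from my setup; the bbox and Activity predicate match what shows up in the planner log above.

```java
public class FilterSketch {

    // Builds a CQL string like the one passed to CQL.toFilter() in the
    // Java program. Attribute names and dates are illustrative placeholders.
    static String buildCql() {
        return String.join(" AND ",
            "BBOX(geom, -79.5, 36.5, -79.3, 36.6)",
            "dtg DURING 2015-05-01T00:00:00Z/2015-05-31T00:00:00Z",
            "Activity = 2");
    }

    public static void main(String[] args) {
        // In the real program this string goes through GeoTools:
        //   Filter f = CQL.toFilter(buildCql());
        //   Query q = new Query("SlowStart", f);
        // and q is what I hand to GeoMesaSpark.rdd(...) in the shell.
        System.out.println(buildCql());
    }
}
```

In the Java program the resulting Query returns quickly, so the filter itself seems fine; it's only when the same Query goes through GeoMesaSpark.rdd that the planner fails to pull out the spatial and temporal predicates.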
Any ideas?
Thanks.
-Simon