Re: [geomesa-users] Compilation error in Java when applying "map" to an RDD generated via GeoMesaSpark.rdd

On 09/05/15 22:19, James Hughes wrote:

Language issues aside, for your case, it sounds like Spark 1.3.1 may work out.  I
missed the details, but some folks who are using GeoMesa internally also ran into
issues with Spark 1.3.0.

Since Spark is a compiled dependency and we don't package it, I think you'll be
able to update the Spark versions in your pom and see your tests work.  (That's
the very best case scenario.)

Unfortunately, it did not work. I installed Spark 1.3.1 in place of 1.3.0-SNAPSHOT and re-ran the integration test, to no avail.


If upgrading Spark doesn't help, you may have to try compiling with Java 6 or 7
(assuming that you don't have too many lambdas to rewrite), or compile your
functional Java code with Java 8 and try to compile the rest with Java 7.
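If I end up going that route, I assume the compiler level can be pinned per module in the pom via maven-compiler-plugin, something along these lines (sketch only; the plugin version is just a plausible one for illustration):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.3</version>
      <configuration>
        <!-- compile this module as Java 7 sources/bytecode -->
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```

The Java 8 lambda-heavy code would then have to live in a separate module with its own compiler settings.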

That looks painful to me.

For the time being I patched my code with this crude method (doomed to fail with any sizeable amount of data):

  public static JavaRDD<Tweet> loadFromGeoMesaTable(JavaSparkContext sc,
      GeoMesaOptions options) throws IOException, SchemaException {

    // Materialize every feature on the driver: this will not scale,
    // but it works around the GeoMesaSpark.rdd compilation issue.
    List<Tweet> featList = new ArrayList<Tweet>();
    SimpleFeatureIterator featIter = TweetFeatureStore.getFeatureType(options)
        .getFeatures().features();
    try {
      while (featIter.hasNext()) {
        featList.add(new Tweet(featIter.next()));
      }
    } finally {
      featIter.close();
    }

    // featList is already an ArrayList, so no defensive copy is needed
    return sc.parallelize(featList);
  }

I know it sucks... but is there a better way to build an RDD out of a FeatureType using parallelize?
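One partial mitigation I am considering (a sketch only; `splitRange` and `loadChunk` below are hypothetical names, not GeoMesa or GeoTools API): instead of materializing every feature on the driver, parallelize lightweight query descriptors (e.g. feature ID or time ranges) and have each Spark partition open its own feature iterator for just its slice. The range-splitting helper is plain Java:

```java
import java.util.ArrayList;
import java.util.List;

public class QueryChunker {

  // Splits the inclusive ID range [start, end] into at most n contiguous
  // chunks of near-equal size; each chunk is a {lo, hi} pair.
  public static List<long[]> splitRange(long start, long end, int n) {
    List<long[]> chunks = new ArrayList<long[]>();
    long total = end - start + 1;
    long base = total / n;
    long rem = total % n;
    long cur = start;
    for (int i = 0; i < n && cur <= end; i++) {
      long size = base + (i < rem ? 1 : 0);
      if (size == 0) {
        continue; // more chunks requested than items; emit nothing extra
      }
      chunks.add(new long[] { cur, cur + size - 1 });
      cur += size;
    }
    return chunks;
  }
}
```

The driver would then do something like `sc.parallelize(QueryChunker.splitRange(0, maxId, numPartitions)).flatMap(r -> loadChunk(options, r))`, where `loadChunk` is a hypothetical per-partition loader that opens its own SimpleFeatureIterator for the sub-query on the worker. Only the small range descriptors cross the driver, never the full feature list.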

Thanks again for your time & patience,

Luca Morandini
Data Architect - AURIN project
Melbourne eResearch Group
Department of Computing and Information Systems
University of Melbourne
Tel. +61 03 903 58 380
Skype: lmorandini
