Re: [geomesa-users] Geomesa Apache Spark Analysis

Hi Max,

GeoMesa is built with Scala 2.11, and Spark 1.6 uses Scala 2.10 - the two versions aren't binary compatible. You have a few options:

1. compile GeoMesa for Scala 2.10
2. compile Spark 1.6 for Scala 2.11
3. use Spark 2.0, which uses Scala 2.11
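The stack trace below bears this out: the `NoSuchMethodError` on `scala.Predef$.ArrowAssoc` comes from the `->` pair syntax, which compiles down to a `Predef` method whose bytecode signature differs between Scala 2.10 and 2.11, so a jar built against one version fails to link on the other. A couple of quick sanity checks you can run in spark-shell (a sketch; the printed version string will vary with your install):

```scala
// 1. Which Scala version is the REPL actually running?
//    Spark 1.6 reports 2.10.x here, Spark 2.0 reports 2.11.x.
println(scala.util.Properties.versionString)

// 2. The call that fails in the trace: `->` desugars to
//    Predef.ArrowAssoc, whose bytecode signature differs between
//    Scala 2.10 and 2.11, so a jar compiled against the other
//    version throws NoSuchMethodError at runtime.
val pair = "user" -> "max"
println(pair)  // prints (user,max)
```

If the REPL reports Scala 2.10.x while the GeoMesa jar was built against 2.11 (as the 1.2.6 shaded jar is), that mismatch is the whole problem.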

Our Scala 2.10 support for GeoMesa 1.2.6 isn't merged into master yet, but there is a branch you can use here: https://github.com/tkunicki/geomesa/tree/geomesa-1.2.6-compute_2.10

In our upcoming version 1.3.0 we support Scala 2.10 more fully - you can check out our latest pre-release version here: https://github.com/locationtech/geomesa/tree/geomesa_2.11-1.3.0-m2

If you choose to compile GeoMesa for Scala 2.10, there are instructions here: https://geomesa.atlassian.net/wiki/pages/viewpage.action?pageId=40337410

The instructions for compiling Spark for Scala 2.11 can be found on the Spark website.
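For completeness, the Spark 1.6 build documentation describes the Scala 2.11 build roughly as follows (a sketch only; the Hadoop profile shown is an example and should match your cluster, and on an HDP install you may prefer a vendor-supplied build instead):

```shell
# From a Spark 1.6 source checkout: point the build at Scala 2.11,
# then compile with the -Dscala-2.11 property. The -Phadoop-2.6
# profile is an example; pick the one matching your Hadoop version.
./dev/change-scala-version.sh 2.11
mvn -Pyarn -Phadoop-2.6 -Dscala-2.11 -DskipTests clean package
```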

Thanks,

Emilio

On 12/15/2016 09:38 AM, Zhang, Guangming wrote:

Hi all,

I'm new to GeoMesa, and during the installation of the Spark analysis module I get the following errors.

Can someone help check what the error might be?

Thank you all.

Best,

Max.

[max@ebdp-ch2-s032p ~]$ spark-shell --jars /opt/geomesa/geomesa-1.2.6/dist/spark/geomesa-compute-1.2.6-shaded.jar
16/12/12 17:06:42 INFO SecurityManager: Changing view acls to: gzhang200
16/12/12 17:06:42 INFO SecurityManager: Changing modify acls to: gzhang200
16/12/12 17:06:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(gzhang200); users with modify permissions: Set(gzhang200)
16/12/12 17:06:43 INFO HttpServer: Starting HTTP Server
16/12/12 17:06:43 INFO Server: jetty-8.y.z-SNAPSHOT
16/12/12 17:06:43 INFO AbstractConnector: Started SocketConnector@0.0.0.0:21995
16/12/12 17:06:43 INFO Utils: Successfully started service 'HTTP class server' on port 21995.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

scala>     val ds = DataStoreFinder.getDataStore(params).asInstanceOf[AccumuloDataStore]
16/12/12 18:16:01 INFO ZooKeeper: Client environment:zookeeper.version=3.4.6-227--1, built on 09/09/2016 22:17 GMT
16/12/12 18:16:01 INFO ZooKeeper: Client environment:host.name=ebdp-ch2-s032p.sys.comcast.net
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.version=1.8.0_91
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_91/jre
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.class.path=/usr/hdp/current/spark-client/conf/:/usr/hdp/2.4.3.0-227/spark/lib/spark-assembly-1.6.2.2.4.3.0-227-hadoop2.7.1.2.4.3.0-227.jar:/usr/hdp/2.4.3.0-227/spark/lib/datanucleus-core-3.2.10.jar:/usr/hdp/2.4.3.0-227/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/hdp/2.4.3.0-227/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/hdp/current/hadoop-client/conf/
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.library.path=/opt/teradata/client/14.10/tbuild/lib:/usr/lib:/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/12/12 18:16:01 INFO ZooKeeper: Client environment:java.compiler=<NA>
16/12/12 18:16:01 INFO ZooKeeper: Client environment:os.name=Linux
16/12/12 18:16:01 INFO ZooKeeper: Client environment:os.arch=amd64
16/12/12 18:16:01 INFO ZooKeeper: Client environment:os.version=2.6.32-504.30.3.el6.x86_64
16/12/12 18:16:01 INFO ZooKeeper: Client environment:user.name=gzhang200
16/12/12 18:16:01 INFO ZooKeeper: Client environment:user.home=/home/gzhang200
16/12/12 18:16:01 INFO ZooKeeper: Client environment:user.dir=/home/gzhang200
16/12/12 18:16:01 INFO ZooKeeper: Initiating client connection, connectString=ebdp-ch2-s012p.sys.comcast.net,ebdp-ch2-s013p.sys.comcast.net,ebdp-ch2-s014p.sys.comcast.net sessionTimeout=30000 watcher=org.apache.accumulo.fate.zookeeper.ZooSession$ZooWatcher@672e3f24
16/12/12 18:16:01 INFO ClientCnxn: Opening socket connection to server ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181. Will not attempt to authenticate using SASL (unknown error)
16/12/12 18:16:01 INFO ClientCnxn: Socket connection established to ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181, initiating session
16/12/12 18:16:01 INFO ClientCnxn: Session establishment complete on server ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181, sessionid = 0x258b8fa4183041e, negotiated timeout = 30000
java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
    at org.locationtech.geomesa.security.package$.getAuthorizationsProvider(package.scala:83)
    at org.locationtech.geomesa.accumulo.data.AccumuloDataStoreFactory$.buildAuthsProvider(AccumuloDataStoreFactory.scala:181)
    at org.locationtech.geomesa.accumulo.data.AccumuloDataStoreFactory.createDataStore(AccumuloDataStoreFactory.scala:45)
    at org.locationtech.geomesa.accumulo.data.AccumuloDataStoreFactory.createDataStore(AccumuloDataStoreFactory.scala:29)
    at org.geotools.data.DataAccessFinder.getDataStore(DataAccessFinder.java:130)
    at org.geotools.data.DataStoreFinder.getDataStore(DataStoreFinder.java:89)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:50)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:52)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:54)
    at $iwC$$iwC$$iwC.<init>(<console>:56)
    at $iwC$$iwC.<init>(<console>:58)
    at $iwC.<init>(<console>:60)
    at <init>(<console>:62)
    at .<init>(<console>:66)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

scala> 16/12/12 22:32:20 INFO RetryInvocationHandler: Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over ebdp-ch2-s035p.sys.comcast.net/172.26.6.223:8020. Trying to fail over immediately.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1934)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4542)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:1097)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:660)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)

    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy25.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:592)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy26.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:902)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    at java.lang.Thread.run(Thread.java:745)
Write failed: Broken pipe

Max.



_______________________________________________
geomesa-users mailing list
geomesa-users@xxxxxxxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://www.locationtech.org/mailman/listinfo/geomesa-users

