Hunter,
It finished relatively quickly. This was the output:
9hzt3m1:geomesa-gdelt kelly.oconor$ hadoop jar target/geomesa-gdelt-1.0-SNAPSHOT.jar geomesa.gdelt.GDELTIngest -instanceId lumify -zookeepers localhost -user root -password password -auths kelly -visibilities kelly -tableName gdelt10 -featureName gdelt -ingestFile hdfs:///Users/kelly.oconor/geomesa-gdelt/data/20140612.export.CSV
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:host.name=cyyzqm1.invertix.int
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_60
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/jre
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/Cellar/hadoop/2.4.0/libexec/etc/hadoop:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/activation-1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/asm-3.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/xz-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/Cellar/hadoop/2.4.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/contrib/capacity-scheduler/*.jar
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/Users/kelly.oconor/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/var/folders/ry/lcr2v2rs0y9c60smxy4lyv893tdl0b/T/
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:os.name=Mac OS X
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86_64
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:os.version=10.9.2
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:user.name=kelly.oconor
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:user.home=/Users/kelly.oconor
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Client environment:user.dir=/Users/kelly.oconor/geomesa-gdelt
14/06/25 16:18:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost sessionTimeout=30000 watcher=org.apache.accumulo.fate.zookeeper.ZooSession$ZooWatcher@64c40106
14/06/25 16:18:18 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/06/25 16:18:18 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
14/06/25 16:18:18 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x146d3fcb5de0044, negotiated timeout = 30000
14/06/25 16:18:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/25 16:18:21 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/06/25 16:18:21 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/06/25 16:18:21 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/06/25 16:18:21 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/06/25 16:18:21 INFO input.FileInputFormat: Total input paths to process : 1
14/06/25 16:18:21 INFO mapreduce.JobSubmitter: number of splits:1
14/06/25 16:18:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1483915024_0001
14/06/25 16:18:21 WARN conf.Configuration: file:/tmp/hadoop-kelly.oconor/mapred/staging/kelly.oconor1483915024/.staging/job_local1483915024_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/06/25 16:18:21 WARN conf.Configuration: file:/tmp/hadoop-kelly.oconor/mapred/staging/kelly.oconor1483915024/.staging/job_local1483915024_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/06/25 16:19:57 INFO mapred.LocalDistributedCacheManager: Creating symlink: /tmp/hadoop-kelly.oconor/mapred/local/1403727501922/geomesa-gdelt-1.0-SNAPSHOT.jar <- /Users/kelly.oconor/geomesa-gdelt/geomesa-gdelt-1.0-SNAPSHOT.jar
14/06/25 16:19:57 INFO mapred.LocalDistributedCacheManager: Localized hdfs://localhost:9000/tmp/geomesa-gdelt-1.0-SNAPSHOT.jar as file:/tmp/hadoop-kelly.oconor/mapred/local/1403727501922/geomesa-gdelt-1.0-SNAPSHOT.jar
14/06/25 16:19:57 WARN conf.Configuration: file:/tmp/hadoop-kelly.oconor/mapred/local/localRunner/kelly.oconor/job_local1483915024_0001/job_local1483915024_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/06/25 16:19:57 WARN conf.Configuration: file:/tmp/hadoop-kelly.oconor/mapred/local/localRunner/kelly.oconor/job_local1483915024_0001/job_local1483915024_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/06/25 16:19:57 INFO mapred.LocalDistributedCacheManager: file:/tmp/hadoop-kelly.oconor/mapred/local/1403727501922/geomesa-gdelt-1.0-SNAPSHOT.jar/
14/06/25 16:19:57 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/06/25 16:19:57 INFO mapreduce.Job: Running job: job_local1483915024_0001
14/06/25 16:19:57 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/06/25 16:19:57 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/06/25 16:19:57 INFO mapred.LocalJobRunner: Waiting for map tasks
14/06/25 16:19:57 INFO mapred.LocalJobRunner: Starting task: attempt_local1483915024_0001_m_000000_0
14/06/25 16:19:57 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
14/06/25 16:19:57 INFO mapred.Task: Using ResourceCalculatorProcessTree : null
14/06/25 16:19:57 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/Users/kelly.oconor/geomesa-gdelt/data/20140612.export.CSV:0+0
14/06/25 16:19:57 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/06/25 16:19:57 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/06/25 16:19:57 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/06/25 16:19:57 INFO mapred.MapTask: soft limit at 83886080
14/06/25 16:19:57 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/06/25 16:19:58 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/06/25 16:19:58 INFO mapred.LocalJobRunner:
14/06/25 16:19:58 INFO mapred.MapTask: Starting flush of map output
14/06/25 16:19:58 INFO mapred.Task: Task:attempt_local1483915024_0001_m_000000_0 is done. And is in the process of committing
14/06/25 16:19:58 INFO mapred.LocalJobRunner: map
14/06/25 16:19:58 INFO mapred.Task: Task 'attempt_local1483915024_0001_m_000000_0' done.
14/06/25 16:19:58 INFO mapred.LocalJobRunner: Finishing task: attempt_local1483915024_0001_m_000000_0
14/06/25 16:19:58 INFO mapred.LocalJobRunner: map task executor complete.
14/06/25 16:19:58 INFO mapred.LocalJobRunner: Waiting for reduce tasks
14/06/25 16:19:58 INFO mapred.LocalJobRunner: Starting task: attempt_local1483915024_0001_r_000000_0
14/06/25 16:19:58 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
14/06/25 16:19:58 INFO mapred.Task: Using ResourceCalculatorProcessTree : null
14/06/25 16:19:58 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2d3913db
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
14/06/25 16:19:58 INFO reduce.EventFetcher: attempt_local1483915024_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
14/06/25 16:19:58 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1483915024_0001_m_000000_0 decomp: 2 len: 6 to MEMORY
14/06/25 16:19:58 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1483915024_0001_m_000000_0
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
14/06/25 16:19:58 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
14/06/25 16:19:58 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
14/06/25 16:19:58 INFO mapred.Merger: Merging 1 sorted segments
14/06/25 16:19:58 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: Merged 1 segments, 2 bytes to disk to satisfy reduce memory limit
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: Merging 1 files, 6 bytes from disk
14/06/25 16:19:58 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
14/06/25 16:19:58 INFO mapred.Merger: Merging 1 sorted segments
14/06/25 16:19:58 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
14/06/25 16:19:58 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/06/25 16:19:58 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
14/06/25 16:19:58 INFO mapred.Task: Task:attempt_local1483915024_0001_r_000000_0 is done. And is in the process of committing
14/06/25 16:19:58 INFO mapred.LocalJobRunner: reduce > reduce
14/06/25 16:19:58 INFO mapred.Task: Task 'attempt_local1483915024_0001_r_000000_0' done.
14/06/25 16:19:58 INFO mapred.LocalJobRunner: Finishing task: attempt_local1483915024_0001_r_000000_0
14/06/25 16:19:58 INFO mapred.LocalJobRunner: reduce task executor complete.
14/06/25 16:19:58 INFO mapreduce.Job: Job job_local1483915024_0001 running in uber mode : false
14/06/25 16:19:58 INFO mapreduce.Job: map 100% reduce 100%
14/06/25 16:19:58 INFO mapreduce.Job: Job job_local1483915024_0001 completed successfully
14/06/25 16:19:58 INFO mapreduce.Job: Counters: 35
    File System Counters
        FILE: Number of bytes read=78321530
        FILE: Number of bytes written=79398018
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=78321110
        HDFS: Number of bytes written=78321110
        HDFS: Number of read operations=39
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=8
    Map-Reduce Framework
        Map input records=0
        Map output records=0
        Map output bytes=0
        Map output materialized bytes=6
        Input split bytes=144
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=6
        Reduce input records=0
        Reduce output records=0
        Spilled Records=0
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=8
        Total committed heap usage (bytes)=632291328
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
Sorry, that is a lot; I'm just really trying to get this resolved. After this ran, I looked in the Accumulo table that was created and there were only 6 entries.
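If it helps with reproducing this, the entry count can be checked by scanning the table from the Accumulo shell; a rough sketch (using the same credentials and table name as in the command above):

    # scan the ingest table without pagination and count the key/value entries
    accumulo shell -u root -p password -e "scan -t gdelt10 -np" | wc -l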
Kelly,

How quickly did the map reduce job finish? One possible issue could be that you need to unzip the .zip file so that the file format is TSV.
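Roughly what I have in mind is something like the following; the local directory and HDFS path here are only placeholders, so adjust them to your layout:

    # extract the download so the ingest input is plain tab-separated text
    unzip GDELT.MASTERREDUCEDV2.1979-2013.zip -d gdelt-tsv
    # copy the uncompressed data into HDFS
    hadoop fs -mkdir -p /gdelt/uncompressed
    hadoop fs -put gdelt-tsv/* /gdelt/uncompressed/

Then re-run the GDELTIngest command with -ingestFile pointing at the uncompressed file in HDFS.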
In case that doesn't help, I will also try a few tests of my own and get back to you.

Hunter
On 06/25/2014 04:31 PM, Kelly O'Conor wrote:

Hi!

Trying to work through the GeoMesa-GDELT tutorial and getting stumped while ingesting the data downloaded from the GDELT link provided.
After running:

(ls -1 | xargs -n 1 zcat) | hadoop fs -put - /Users/kelly.oconor/geomesa-gdelt/GDELT.MASTERREDUCEDV2.1979-2013.zip

Then:

hadoop jar target/geomesa-gdelt-1.0-SNAPSHOT.jar geomesa.gdelt.GDELTIngest -instanceId lumify -zookeepers localhost -user root -password password -auths kelly -visibilities kelly -tableName gdelt10 -featureName gdelt -ingestFile hdfs:///Users/kelly.oconor/geomesa-gdelt/GDELT.MASTERREDUCEDV2.1979-2013.zip
I have noticed that in my Accumulo tables there appear to be only 6 entries. The MapReduce job seems to finish, so I am at a loss for where the data is going. How do I get the entries into the table?

Thanks,
Kelly O'Conor
_______________________________________________
geomesa-dev mailing list
geomesa-dev@xxxxxxxxxxxxxxxx
http://locationtech.org/mailman/listinfo/geomesa-dev