Re: [geomesa-dev] Problems running GeoMesa GDELT tutorial

Bob,

I've checked the Mapper, and it does appear that it's trying to access
the fields without first checking that the line is split into the
requisite number of parts.

You and Chris seem to be on the right track: there is probably
something wrong with the input data.  I'm going to open a ticket for us
to add an "are there enough fields?" check to the mapper so that it
does not throw an exception in this case.
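The guard would look something like the sketch below. Note this is illustrative only: the class name, the ATTRIBUTE_COUNT value, and the method are assumptions for the example, not the actual geomesa-gdelt mapper code.

```java
// Minimal sketch of an "are there enough fields?" guard for a TSV mapper.
// ATTRIBUTE_COUNT and all names here are illustrative assumptions, not
// the actual geomesa-gdelt code.
public class GdeltLineCheck {

    // Hypothetical expected column count for one GDELT TSV record.
    static final int ATTRIBUTE_COUNT = 57;

    // Split a tab-separated line; return null for lines that are too
    // short, so the mapper can skip them instead of throwing
    // ArrayIndexOutOfBoundsException on a later field access.
    static String[] splitLine(String line) {
        String[] fields = line.split("\t", -1); // -1 keeps trailing empty fields
        return (fields.length < ATTRIBUTE_COUNT) ? null : fields;
    }

    public static void main(String[] args) {
        // A truncated line is rejected rather than crashing the map task.
        System.out.println(splitLine("19790101\t1979\tshort") == null); // true

        // A line with the full column count is accepted.
        StringBuilder full = new StringBuilder("f0");
        for (int i = 1; i < ATTRIBUTE_COUNT; i++) {
            full.append('\t').append('f').append(i);
        }
        System.out.println(splitLine(full.toString()) != null); // true
    }
}
```

In a real Mapper.map() you would typically also bump a counter for the skipped records so bad input is visible in the job stats rather than silently dropped.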

We will post back to this list when the project is updated.

Thanks for writing in.  Please let us know -- either here or on
geomesa-users@xxxxxxxxxxxxxxxx -- if/as you encounter additional issues.

Sincerely,
  -- Chris

_______________

Chris Eichelberger
Commonwealth Computer Research, Inc.
434-284-9422 (work)






On Thu, 2014-05-08 at 22:20 +0000, Chris Snider wrote:
> Bob,
> 
>  
> 
> I haven’t run the gdelt dataset myself yet.  That is something I plan
> on doing tomorrow or early next week as I learn more about Hadoop.
> However, when I see errors like that, it is usually my data that I
> have messed up somehow.  I agree that it would be a good thing to try
> a smaller dataset and see if the job completes.
> 
>  
> 
> I’ll post back to the list if I am successful or run into the
> same/other issues.
> 
>  
> 
> Chris Snider
> 
> Senior Software Engineer
> 
> Intelligent Software Solutions, Inc.
> 
> Direct (719) 452-7257
> 
> 
> 
>  
> 
> From: Barnhart, Bob M. [mailto:ROBERT.M.BARNHART@xxxxxxxxxx] 
> Sent: Thursday, May 08, 2014 3:26 PM
> To: Chris Snider; Discussions between GeoMesa committers
> Subject: RE: [geomesa-dev] Problems running GeoMesa GDELT tutorial
> 
> 
>  
> 
> Chris,
> 
>  
> 
> Thanks for the fix…it worked fine using an ad-hoc authorization
> (“GDELT” vs. “COMMA”).
> 
>  
> 
> I don’t know whether I can collaborate with you directly (as opposed
> to mailing to geomesa-dev@xxxxxxxxxxxxxxxx) but I thought I’d see if
> you were willing to take a look at another issue.
> 
>  
> 
> Still trying to ingest the data loaded into Hadoop from
> http://data.gdeltproject.org/events/GDELT.MASTERREDUCEDV2.1979-2013.zip,
> I'm now getting an “ArrayIndexOutOfBoundsException” error (see below).
> 
>  
> 
> I’m not sure if (1) the data might be at issue, or (2) my
> Accumulo/Hadoop installation might be at issue. I’m tempted to try
> loading a smaller dataset, perhaps only (some of) the 2014 files from
> http://data.gdeltproject.org/events/index.html.
> 
>  
> 
> Any ideas what might be going on here?
> 
>  
> 
> Thanks,
> 
> Bob Barnhart
> 
>  
> 
> (ingest log…)
> 
>  
> 
> -----------------------------------------------------
> 
> Running: hadoop jar ./target/geomesa-gdelt-1.0-SNAPSHOT.jar
>     geomesa.gdelt.GDELTIngest
>     -instanceId ntc-irad
>     -zookeepers localhost:2181
>     -user root
>     -password (r00t)
>     -auths GDELT
>     -tableName gdelt
>     -featureName event
>     -ingestFile hdfs:///gdelt/uncompressed/gdelt.tsv
> 
> -----------------------------------------------------
> 
> 14/05/08 13:39:09 INFO HSQLDB45DD8FA39A.ENGINE: dataFileCache open
> start
> 
> 14/05/08 13:39:09 INFO HSQLDB45DD8FA39A.ENGINE: Checkpoint start
> 
> 14/05/08 13:39:09 INFO HSQLDB45DD8FA39A.ENGINE: Checkpoint end
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52
> GMT
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:host.name=localhost
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.7.0_55
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=Oracle Corporation
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=/usr/local/hadoop-2.4.0/etc/hadoop:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoo
p-2.4.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/ha
doop-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guava-11.0.2.j
ar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/
local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/l
ib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop-2.4.0/contrib/capacity-scheduler/*.jar
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=/usr/local/hadoop-2.4.0/lib/native
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.32-431.11.2.el6.x86_64
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:user.name=barnhartr
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/barnhartr
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/usr/local/geomesa-gdelt-master
> 
> 14/05/08 13:39:11 INFO zookeeper.ZooKeeper: Initiating client
> connection, connectString=localhost:2181 sessionTimeout=30000
> watcher=org.apache.accumulo.fate.zookeeper.ZooSession
> $ZooWatcher@325c4c8
> 
> 14/05/08 13:39:11 INFO zookeeper.ClientCnxn: Opening socket connection
> to server localhost/127.0.0.1:2181. Will not attempt to authenticate
> using SASL (unknown error)
> 
> 14/05/08 13:39:11 INFO zookeeper.ClientCnxn: Socket connection
> established to localhost/127.0.0.1:2181, initiating session
> 
> 14/05/08 13:39:11 INFO zookeeper.ClientCnxn: Session establishment
> complete on server localhost/127.0.0.1:2181, sessionid =
> 0x145dd8bc67d0007, negotiated timeout = 30000
> 
> OpenJDK 64-Bit Server VM warning: You have loaded
> library /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0 which
> might have disabled stack guard. The VM will try to fix the stack
> guard now.
> 
> It's highly recommended that you fix the library with 'execstack -c
> <libfile>', or link it with '-z noexecstack'.
> 
> 14/05/08 13:39:15 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 
> 14/05/08 13:39:18 INFO Configuration.deprecation: session.id is
> deprecated. Instead, use dfs.metrics.session-id
> 
> 14/05/08 13:39:18 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> processName=JobTracker, sessionId=
> 
> 14/05/08 13:39:19 WARN mapreduce.JobSubmitter: Hadoop command-line
> option parsing not performed. Implement the Tool interface and execute
> your application with ToolRunner to remedy this.
> 
> 14/05/08 13:39:19 WARN mapreduce.JobSubmitter: No job jar file set.
> User classes may not be found. See Job or Job#setJar(String).
> 
> 14/05/08 13:39:19 INFO input.FileInputFormat: Total input paths to
> process : 1
> 
> 14/05/08 13:39:19 INFO mapreduce.JobSubmitter: number of splits:49
> 
> 14/05/08 13:39:19 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_local1091701005_0001
> 
> 14/05/08 13:39:19 WARN conf.Configuration:
> file:/hadoop/tmp/mapred/staging/barnhartr1091701005/.staging/job_local1091701005_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 
> 14/05/08 13:39:19 WARN conf.Configuration:
> file:/hadoop/tmp/mapred/staging/barnhartr1091701005/.staging/job_local1091701005_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 
> 14/05/08 13:39:58 INFO mapred.LocalDistributedCacheManager: Creating
> symlink: /hadoop/tmp/mapred/local/1399581560010/geomesa-gdelt-1.0-SNAPSHOT.jar <- /usr/local/geomesa-gdelt-master/geomesa-gdelt-1.0-SNAPSHOT.jar
> 
> 14/05/08 13:39:58 INFO mapred.LocalDistributedCacheManager: Localized
> hdfs://localhost:8020/tmp/geomesa-gdelt-1.0-SNAPSHOT.jar as
> file:/hadoop/tmp/mapred/local/1399581560010/geomesa-gdelt-1.0-SNAPSHOT.jar
> 
> 14/05/08 13:39:58 WARN conf.Configuration:
> file:/hadoop/tmp/mapred/local/localRunner/barnhartr/job_local1091701005_0001/job_local1091701005_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 
> 14/05/08 13:39:58 WARN conf.Configuration:
> file:/hadoop/tmp/mapred/local/localRunner/barnhartr/job_local1091701005_0001/job_local1091701005_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 
> 14/05/08 13:39:58 INFO mapred.LocalDistributedCacheManager:
> file:/hadoop/tmp/mapred/local/1399581560010/geomesa-gdelt-1.0-SNAPSHOT.jar/
> 
> 14/05/08 13:39:58 INFO mapreduce.Job: The url to track the job:
> http://localhost:8080/
> 
> 14/05/08 13:39:58 INFO mapreduce.Job: Running job:
> job_local1091701005_0001
> 
> 14/05/08 13:39:58 INFO mapred.LocalJobRunner: OutputCommitter set in
> config null
> 
> 14/05/08 13:39:58 INFO mapred.LocalJobRunner: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 
> 14/05/08 13:39:59 INFO mapred.LocalJobRunner: Waiting for map tasks
> 
> 14/05/08 13:39:59 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000000_0
> 
> 14/05/08 13:39:59 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6442450944
> +138958463
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:39:59 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:39:59 INFO mapreduce.Job: Job job_local1091701005_0001
> running in uber mode : false
> 
> 14/05/08 13:39:59 INFO mapreduce.Job:  map 0% reduce 0%
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:01 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000001_0
> 
> 14/05/08 13:40:01 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:0+134217728
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:01 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000002_0
> 
> 14/05/08 13:40:01 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:134217728+134217728
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:01 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:02 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000003_0
> 
> 14/05/08 13:40:02 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:268435456+134217728
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:02 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000004_0
> 
> 14/05/08 13:40:02 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:402653184+134217728
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:02 WARN impl.ThriftTransportPool: Server
> 127.0.0.1:9997:9997 (120000) had 20 failures in a short time period,
> will not complain anymore 
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:02 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000005_0
> 
> 14/05/08 13:40:02 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:536870912+134217728
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:02 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:03 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000006_0
> 
> 14/05/08 13:40:03 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:671088640+134217728
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:03 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000007_0
> 
> 14/05/08 13:40:03 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:805306368+134217728
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:03 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000008_0
> 
> 14/05/08 13:40:03 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:939524096+134217728
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:03 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000009_0
> 
> 14/05/08 13:40:03 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1073741824
> +134217728
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:03 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:03 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:03 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:04 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:04 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000010_0
> 14/05/08 13:40:04 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:04 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1207959552+134217728
> 14/05/08 13:40:04 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:04 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:04 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:04 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:04 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:04 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:04 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:04 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000011_0
> 14/05/08 13:40:04 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:04 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1342177280+134217728
> 14/05/08 13:40:04 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:04 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:04 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:04 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:04 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:04 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:04 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:04 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000012_0
> 14/05/08 13:40:04 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:04 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1476395008+134217728
> 14/05/08 13:40:04 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:04 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:04 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:04 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:04 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:04 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:04 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:04 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000013_0
> 14/05/08 13:40:04 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:04 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1610612736+134217728
> 14/05/08 13:40:04 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:04 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:04 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:04 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:04 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:04 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:05 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:05 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000014_0
> 14/05/08 13:40:05 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:05 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1744830464+134217728
> 14/05/08 13:40:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:05 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:05 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:05 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:05 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:05 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:05 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:05 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:05 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000015_0
> 14/05/08 13:40:05 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:05 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1879048192+134217728
> 14/05/08 13:40:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:05 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:05 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:05 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:05 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:05 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:05 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:05 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000016_0
> 14/05/08 13:40:05 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:05 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2013265920+134217728
> 14/05/08 13:40:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:05 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:05 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:05 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:05 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:05 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:05 INFO mapreduce.Job:  map 2% reduce 0%
> 14/05/08 13:40:05 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:05 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000017_0
> 14/05/08 13:40:05 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:05 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2147483648+134217728
> 14/05/08 13:40:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:06 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:06 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:06 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:06 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:06 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:06 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000018_0
> 14/05/08 13:40:06 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:06 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2281701376+134217728
> 14/05/08 13:40:06 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:06 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:06 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:06 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:06 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:06 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:06 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000019_0
> 14/05/08 13:40:06 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:06 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2415919104+134217728
> 14/05/08 13:40:06 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:06 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:06 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:06 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:06 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:06 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:06 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000020_0
> 14/05/08 13:40:06 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:06 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2550136832+134217728
> 14/05/08 13:40:06 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:06 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:06 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:06 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:06 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:06 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:06 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000021_0
> 14/05/08 13:40:06 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:06 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2684354560+134217728
> 14/05/08 13:40:06 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:07 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:07 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000022_0
> 14/05/08 13:40:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:07 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2818572288+134217728
> 14/05/08 13:40:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:07 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:07 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000023_0
> 14/05/08 13:40:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:07 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2952790016+134217728
> 14/05/08 13:40:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:07 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:07 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000024_0
> 14/05/08 13:40:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:07 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3087007744+134217728
> 14/05/08 13:40:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:07 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:07 INFO mapreduce.Job:  map 6% reduce 0%
> 14/05/08 13:40:07 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000025_0
> 14/05/08 13:40:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:07 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3221225472+134217728
> 14/05/08 13:40:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:08 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:08 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:08 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000026_0
> 14/05/08 13:40:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:08 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3355443200+134217728
> 14/05/08 13:40:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:08 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:08 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:08 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000027_0
> 14/05/08 13:40:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:08 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3489660928+134217728
> 14/05/08 13:40:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:08 INFO mapreduce.Job:  map 10% reduce 0%
> 14/05/08 13:40:08 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:08 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:09 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000028_0
> 14/05/08 13:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:09 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3623878656+134217728
> 14/05/08 13:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:09 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:09 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000029_0
> 14/05/08 13:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:09 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3758096384+134217728
> 14/05/08 13:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:09 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:09 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000030_0
> 14/05/08 13:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:09 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3892314112+134217728
> 14/05/08 13:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:09 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:09 INFO mapreduce.Job:  map 20% reduce 0%
> 14/05/08 13:40:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:10 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:10 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000031_0
> 14/05/08 13:40:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:10 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4026531840+134217728
> 14/05/08 13:40:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:10 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:10 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000032_0
> 14/05/08 13:40:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:10 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4160749568+134217728
> 14/05/08 13:40:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:10 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:10 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000033_0
> 14/05/08 13:40:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:10 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4294967296+134217728
> 14/05/08 13:40:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:10 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:10 INFO mapreduce.Job:  map 29% reduce 0%
> 14/05/08 13:40:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:10 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:11 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000034_0
> 14/05/08 13:40:11 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:11 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4429185024+134217728
> 14/05/08 13:40:11 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:11 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:11 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:11 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:11 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:11 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:11 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000035_0
> 14/05/08 13:40:11 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:11 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4563402752+134217728
> 14/05/08 13:40:11 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:11 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:11 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:11 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:11 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:11 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:11 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000036_0
> 14/05/08 13:40:11 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:11 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4697620480+134217728
> 14/05/08 13:40:11 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:11 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:11 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:11 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:11 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:11 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:11 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:11 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000037_0
> 14/05/08 13:40:11 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:11 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4831838208+134217728
> 14/05/08 13:40:11 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:12 INFO mapreduce.Job:  map 37% reduce 0%
> 14/05/08 13:40:12 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:12 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:12 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:12 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:12 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:12 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000038_0
> 14/05/08 13:40:12 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:12 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4966055936+134217728
> 14/05/08 13:40:12 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:12 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:12 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:12 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:12 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:12 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:12 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000039_0
> 14/05/08 13:40:12 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:12 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5100273664+134217728
> 14/05/08 13:40:12 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:12 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:12 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:12 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:12 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:12 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:12 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000040_0
> 14/05/08 13:40:12 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:12 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5234491392+134217728
> 14/05/08 13:40:12 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:12 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:12 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:12 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:12 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:12 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:12 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:12 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000041_0
> 14/05/08 13:40:12 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:12 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5368709120+134217728
> 14/05/08 13:40:12 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:12 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:12 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:12 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:12 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:12 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:13 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000042_0
> 14/05/08 13:40:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:13 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5502926848+134217728
> 14/05/08 13:40:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:13 INFO mapreduce.Job:  map 45% reduce 0%
> 14/05/08 13:40:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:13 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:13 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000043_0
> 14/05/08 13:40:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:13 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5637144576+134217728
> 14/05/08 13:40:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:13 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:13 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000044_0
> 14/05/08 13:40:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:13 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5771362304+134217728
> 14/05/08 13:40:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:13 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:13 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000045_0
> 14/05/08 13:40:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:13 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5905580032+134217728
> 14/05/08 13:40:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: map > sort
> 14/05/08 13:40:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:13 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 14/05/08 13:40:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
> 14/05/08 13:40:13 INFO mapred.MapTask: Starting flush of map output
> 14/05/08 13:40:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1091701005_0001_m_000046_0
> 14/05/08 13:40:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 14/05/08 13:40:13 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6039797760+134217728
> 14/05/08 13:40:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 14/05/08 13:40:14 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
> 14/05/08 13:40:14 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 14/05/08 13:40:14 INFO mapred.MapTask: soft limit at 83886080
> 14/05/08 13:40:14 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:14 INFO mapreduce.Job:  map 51% reduce 0%
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000047_0
> 
> 14/05/08 13:40:14 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6174015488
> +134217728
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1091701005_0001_m_000048_0
> 
> 14/05/08 13:40:14 INFO mapred.Task:  Using
> ResourceCalculatorProcessTree : [ ]
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Processing split:
> hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6308233216
> +134217728
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Map output collector class =
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: (EQUATOR) 0 kvi
> 26214396(104857584)
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: soft limit at 83886080
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: bufstart = 0; bufvoid =
> 104857600
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: kvstart = 26214396; length =
> 6553600
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:14 INFO mapred.MapTask: Starting flush of map output
> 
> 14/05/08 13:40:14 INFO mapred.LocalJobRunner: map task executor
> complete.
> 
> 14/05/08 13:40:14 WARN mapred.LocalJobRunner: job_local1091701005_0001
> 
> java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 39
> 
>                 at org.apache.hadoop.mapred.LocalJobRunner
> $Job.runTasks(LocalJobRunner.java:462)
> 
>                 at org.apache.hadoop.mapred.LocalJobRunner
> $Job.run(LocalJobRunner.java:522)
> 
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 39
> 
>                 at
> geomesa.gdelt.GDELTIngestMapper.map(GDELTIngestMapper.java:60)
> 
>                 at
> geomesa.gdelt.GDELTIngestMapper.map(GDELTIngestMapper.java:27)
> 
>                 at
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 
>                 at
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> 
>                 at
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
> 
>                 at org.apache.hadoop.mapred.LocalJobRunner$Job
> $MapTaskRunnable.run(LocalJobRunner.java:243)
> 
>                 at java.util.concurrent.Executors
> $RunnableAdapter.call(Executors.java:471)
> 
>                 at
> java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 
>                 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 
>                 at java.util.concurrent.ThreadPoolExecutor
> $Worker.run(ThreadPoolExecutor.java:615)
> 
>                 at java.lang.Thread.run(Thread.java:744)
> 
> 14/05/08 13:40:15 INFO mapreduce.Job:  map 57% reduce 0%
> 
> 14/05/08 13:40:15 INFO mapreduce.Job: Job job_local1091701005_0001
> failed with state FAILED due to: NA
> 
> 14/05/08 13:40:15 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:15 INFO mapreduce.Job: Counters: 25
> 
>                 File System Counters
> 
>                                 FILE: Number of bytes read=1100796112
> 
>                                 FILE: Number of bytes
> written=1112405102
> 
>                                 FILE: Number of read operations=0
> 
>                                 FILE: Number of large read
> operations=0
> 
>                                 FILE: Number of write operations=0
> 
>                                 HDFS: Number of bytes read=1101293020
> 
>                                 HDFS: Number of bytes
> written=1097385436
> 
>                                 HDFS: Number of read operations=1465
> 
>                                 HDFS: Number of large read
> operations=0
> 
>                                 HDFS: Number of write operations=112
> 
>                 Map-Reduce Framework
> 
>                                 Map input records=28
> 
>                                 Map output records=0
> 
>                                 Map output bytes=0
> 
>                                 Map output materialized bytes=168
> 
>                                 Input split bytes=3220
> 
>                                 Combine input records=0
> 
>                                 Spilled Records=0
> 
>                                 Failed Shuffles=0
> 
>                                 Merged Map outputs=0
> 
>                                 GC time elapsed (ms)=70902
> 
>                                 CPU time spent (ms)=0
> 
>                                 Physical memory (bytes) snapshot=0
> 
>                                 Virtual memory (bytes) snapshot=0
> 
>                                 Total committed heap usage
> (bytes)=6046191616
> 
>                 File Input Format Counters 
> 
>                                 Bytes Read=114688
> 
> Exception in thread "main" java.lang.Exception: Job failed
> 
>                 at
> geomesa.gdelt.GDELTIngest.runMapReduceJob(GDELTIngest.java:152)
> 
>                 at
> geomesa.gdelt.GDELTIngest.main(GDELTIngest.java:110)
> 
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> 
>                 at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 
>                 at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 
>                 at java.lang.reflect.Method.invoke(Method.java:606)
> 
>                 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> 
> 14/05/08 13:40:15 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:15 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:16 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:16 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:16 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:17 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:17 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:17 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:18 INFO mapred.LocalJobRunner: map > sort
> 
> 14/05/08 13:40:18 INFO mapred.LocalJobRunner: map > sort
> 
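[Editor's note: the `ArrayIndexOutOfBoundsException: 39` above comes from the mapper indexing into a split line that has fewer fields than expected. A minimal sketch of the "are there enough fields?" guard discussed in this thread — the class and method names here are illustrative, not the actual GeoMesa code:]

```java
// Illustrative guard against short records in a tab-delimited GDELT line.
// A truncated or malformed line splits into fewer parts than expected, and
// indexing past the end throws ArrayIndexOutOfBoundsException, as in the
// log above. Checking the field count first lets the mapper skip the record.
public class GdeltLineCheck {

    /** Returns true only if the line splits into at least minFields tab-separated parts. */
    public static boolean hasRequiredFields(String line, int minFields) {
        if (line == null) {
            return false;
        }
        // limit of -1 keeps trailing empty fields instead of dropping them
        String[] parts = line.split("\t", -1);
        return parts.length >= minFields;
    }

    public static void main(String[] args) {
        System.out.println(hasRequiredFields("a\tb\tc", 3)); // true
        System.out.println(hasRequiredFields("a\tb", 3));    // false
    }
}
```

[In a Hadoop `Mapper`, `map()` would simply `return` (optionally incrementing a counter for bad records) when this check fails, instead of letting the exception kill the job.]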
>  
> 
> From: Chris Snider [mailto:chris.snider@xxxxxxxxxx] 
> Sent: Thursday, May 08, 2014 1:01 PM
> To: Discussions between GeoMesa committers; Barnhart, Bob M.
> Subject: RE: [geomesa-dev] Problems running GeoMesa GDELT tutorial
> 
> 
>  
> 
> Bob,
> 
>  
> 
> I think there may be a misunderstanding between the DB permissions and
> the user authorizations inherent in the Accumulo Schema.
> 
>  
> 
> In an Accumulo shell, run:
> 
> getauths -u root
> 
>  
> 
> These are the authorizations that the connector is expecting.
> 
>  
> 
> You can set authorizations for the root user with the following in the
> accumulo shell:
> 
> setauths -s "MY,COMMA,DELIMITED,AUTHS" -u root
> 
>  
> 
> Use the new auths in the connection block, for example -auths COMMA.
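[Editor's note: the BAD_AUTHORIZATIONS error in this thread is a subset check — the auths a client requests must all be among those assigned to the user via `setauths` — and is independent of table permissions. A rough plain-Java illustration of that check; the class and method names are hypothetical, and this is not the real Accumulo implementation:]

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustration of Accumulo's authorization subset check: every auth label
// the client requests must already be in the user's assigned set, or the
// request is rejected (surfacing as BAD_AUTHORIZATIONS).
public class AuthsCheck {

    /** True iff every comma-delimited requested auth appears in the assigned auths. */
    public static boolean authorized(String requestedCsv, String assignedCsv) {
        Set<String> assigned = new HashSet<>(Arrays.asList(assignedCsv.split(",")));
        for (String auth : requestedCsv.split(",")) {
            if (!assigned.contains(auth)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // After setauths -s "MY,COMMA,DELIMITED,AUTHS" -u root:
        System.out.println(authorized("COMMA", "MY,COMMA,DELIMITED,AUTHS"));      // true
        // Table permission names are not authorization labels, so this fails:
        System.out.println(authorized("Table.READ", "MY,COMMA,DELIMITED,AUTHS")); // false
    }
}
```

[This is why passing `Table.READ,Table.WRITE,…` as `-auths` fails even for a root user with every permission granted: permissions govern operations on tables, while authorizations gate which cell visibility labels a scan may request.]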
> 
>  
> 
> Chris Snider
> 
> Senior Software Engineer
> 
> Intelligent Software Solutions, Inc.
> 
> 
> 
>  
> 
> From: geomesa-dev-bounces@xxxxxxxxxxxxxxxx
> [mailto:geomesa-dev-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Andrew Ross
> Sent: Thursday, May 08, 2014 1:47 PM
> To: Barnhart, Bob M.; geomesa-dev@xxxxxxxxxxxxxxxx
> Subject: Re: [geomesa-dev] Problems running GeoMesa GDELT tutorial
> 
> 
>  
> 
> Hi Bob,
> 
> Thank you so much for your interest in GeoMesa.
> 
> I'll check into what's up with geomesa-user. In the meantime, I've
> added geomesa-dev, which has the developer team on it.
> 
> Cheers!
> 
> Andrew
> 
> On 08/05/14 21:39, Barnhart, Bob M. wrote:
> 
> 
>         I’ve run into problems trying to work through the GeoMesa
>         GDELT Analysis tutorial at
>         http://geomesa.github.io/2014/04/17/geomesa-gdelt-analysis/ .
>         I sent the following email to ‘geomesa-user@xxxxxxxxxxxxxxxx’
>         but it bounced with a non-existent user error. I don’t know if
>         there are any GeoMesa communities of interest to whom I could
>         send a description of my problem, so I’m directing my question
>         to you in hopes that you could provide a (simple) solution, or
>         direct me to a person/site where I might find an answer.
>         
>          
>         
>         I am running Accumulo 1.5.1, Hadoop 2.4.0 and Zookeeper 3.4.6.
>         
>          
>         
>         I’ve been able to load the GDELT data file
>         http://data.gdeltproject.org/events/GDELT.MASTERREDUCEDV2.1979-2013.zip into Hadoop and am trying to ingest this data into Accumulo using the Hadoop command in the tutorial.
>         
>          
>         
>         As shown in the execution trace below, the ingest process
>         fails with the error:
>         
>          
>         
>         java.lang.Exception: java.lang.RuntimeException:
>         org.apache.accumulo.core.client.AccumuloSecurityException:
>         Error BAD_AUTHORIZATIONS for user root - The user does not
>         have the specified authorizations assigned
>         
>          
>         
>         I don’t know to what “specified authorizations” this error
>         might be referring. As shown below, the Accumulo “root” user
>         has all possible System.* and Table.* permissions, including
>         the ‘gdelt’ table:
>         
>          
>         
>         $ accumulo shell –u root
>         
>         root@ntc-irad> userpermissions -u root
>         
>         System permissions: System.GRANT, System.CREATE_TABLE,
>         System.DROP_TABLE, System.ALTER_TABLE, System.CREATE_USER,
>         System.DROP_USER, System.ALTER_USER, System.SYSTEM
>         
>          
>         
>         Table permissions (!METADATA): Table.READ, Table.ALTER_TABLE
>         
>         Table permissions (gdelt): Table.READ, Table.WRITE,
>         Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT,
>         Table.DROP_TABLE
>         
>         Table permissions (trace): Table.READ, Table.WRITE,
>         Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT,
>         Table.DROP_TABLE
>         
>          
>         
>         I would be grateful for any assistance in getting the GDELT
>         data ingested into Accumulo so that I could complete my
>         assessment of GeoMesa via the GDELT tutorial.
>         
>          
>         
>         Best regards,
>         
>         Bob Barnhart
>         
>         Chief Systems Engineer | 858 826 5596 (Office) | 619 972 9489
>         (Mobile) | barnhartr@xxxxxxxxxx 
>         
>          
>         
>         -----------------------------------------------------
>         
>         Running: hadoop jar ./target/geomesa-gdelt-1.0-SNAPSHOT.jar \
>             geomesa.gdelt.GDELTIngest \
>             -instanceId ntc-irad \
>             -zookeepers 127.0.0.1 \
>             -user root -password (r00t) \
>             -auths Table.READ,Table.WRITE,Table.BULK_IMPORT,Table.ALTER_TABLE,Table.GRANT,Table.DROP_TABLE \
>             -tableName gdelt -featureName event \
>             -ingestFile hdfs:///gdelt/uncompressed/gdelt.tsv
>         
>         -----------------------------------------------------
>         
>         14/05/08 11:52:58 INFO HSQLDB45DD2E6EE0.ENGINE: dataFileCache
>         open start
>         
>         14/05/08 11:52:59 INFO HSQLDB45DD2E6EE0.ENGINE: Checkpoint
>         start
>         
>         14/05/08 11:52:59 INFO HSQLDB45DD2E6EE0.ENGINE: Checkpoint end
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:zookeeper.version=3.4.5-1392090, built on
>         09/30/2012 17:52 GMT
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:host.name=localhost
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.version=1.7.0_55
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.vendor=Oracle Corporation
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.class.path=/usr/local/hadoop-2.4.0/etc/hadoop:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop-2.4.0/contrib/capacity-scheduler/*.jar
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.library.path=/usr/local/hadoop-2.4.0/lib/native
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.io.tmpdir=/tmp
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:java.compiler=<NA>
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:os.name=Linux
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:os.arch=amd64
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:os.version=2.6.32-431.11.2.el6.x86_64
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:user.name=barnhartr
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:user.home=/home/barnhartr
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Client
>         environment:user.dir=/usr/local/geomesa-gdelt-master
>         
>         14/05/08 11:53:01 INFO zookeeper.ZooKeeper: Initiating client
>         connection, connectString=127.0.0.1 sessionTimeout=30000
>         watcher=org.apache.accumulo.fate.zookeeper.ZooSession
>         $ZooWatcher@1ec896d2
>         
>         14/05/08 11:53:01 INFO zookeeper.ClientCnxn: Opening socket
>         connection to server localhost/127.0.0.1:2181. Will not
>         attempt to authenticate using SASL (unknown error)
>         
>         14/05/08 11:53:01 INFO zookeeper.ClientCnxn: Socket connection
>         established to localhost/127.0.0.1:2181, initiating session
>         
>         14/05/08 11:53:01 INFO zookeeper.ClientCnxn: Session
>         establishment complete on server localhost/127.0.0.1:2181,
>         sessionid = 0x145dc8e7394000e, negotiated timeout = 30000
>         
>         OpenJDK 64-Bit Server VM warning: You have loaded
>         library /usr/local/hadoop-2.4.0/lib/native/libhadoop.so.1.0.0
>         which might have disabled stack guard. The VM will try to fix
>         the stack guard now.
>         
>         It's highly recommended that you fix the library with
>         'execstack -c <libfile>', or link it with '-z noexecstack'.
>         
>         14/05/08 11:53:05 WARN util.NativeCodeLoader: Unable to load
>         native-hadoop library for your platform... using builtin-java
>         classes where applicable
>         
>         14/05/08 11:53:08 INFO Configuration.deprecation: session.id
>         is deprecated. Instead, use dfs.metrics.session-id
>         
>         14/05/08 11:53:08 INFO jvm.JvmMetrics: Initializing JVM
>         Metrics with processName=JobTracker, sessionId=
>         
>         14/05/08 11:53:08 WARN mapreduce.JobSubmitter: Hadoop
>         command-line option parsing not performed. Implement the Tool
>         interface and execute your application with ToolRunner to
>         remedy this.
>         
>         14/05/08 11:53:08 WARN mapreduce.JobSubmitter: No job jar file
>         set.  User classes may not be found. See Job or
>         Job#setJar(String).
>         
>         14/05/08 11:53:08 INFO input.FileInputFormat: Total input
>         paths to process : 1
>         
>         14/05/08 11:53:08 INFO mapreduce.JobSubmitter: number of
>         splits:49
>         14/05/08 11:53:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local422695915_0001
>         14/05/08 11:53:09 WARN conf.Configuration: file:/hadoop/tmp/mapred/staging/barnhartr422695915/.staging/job_local422695915_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>         14/05/08 11:53:09 WARN conf.Configuration: file:/hadoop/tmp/mapred/staging/barnhartr422695915/.staging/job_local422695915_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
>         14/05/08 11:53:47 INFO mapred.LocalDistributedCacheManager: Creating symlink: /hadoop/tmp/mapred/local/1399575189575/geomesa-gdelt-1.0-SNAPSHOT.jar <- /usr/local/geomesa-gdelt-master/geomesa-gdelt-1.0-SNAPSHOT.jar
>         14/05/08 11:53:47 INFO mapred.LocalDistributedCacheManager: Localized hdfs://localhost:8020/tmp/geomesa-gdelt-1.0-SNAPSHOT.jar as file:/hadoop/tmp/mapred/local/1399575189575/geomesa-gdelt-1.0-SNAPSHOT.jar
>         14/05/08 11:53:47 WARN conf.Configuration: file:/hadoop/tmp/mapred/local/localRunner/barnhartr/job_local422695915_0001/job_local422695915_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
>         14/05/08 11:53:47 WARN conf.Configuration: file:/hadoop/tmp/mapred/local/localRunner/barnhartr/job_local422695915_0001/job_local422695915_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
>         14/05/08 11:53:47 INFO mapred.LocalDistributedCacheManager: file:/hadoop/tmp/mapred/local/1399575189575/geomesa-gdelt-1.0-SNAPSHOT.jar/
>         14/05/08 11:53:47 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
>         14/05/08 11:53:47 INFO mapreduce.Job: Running job: job_local422695915_0001
>         14/05/08 11:53:47 INFO mapred.LocalJobRunner: OutputCommitter set in config null
>         14/05/08 11:53:47 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
>         14/05/08 11:53:47 INFO mapred.LocalJobRunner: Waiting for map tasks
>         14/05/08 11:53:47 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000000_0
>         14/05/08 11:53:47 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:47 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6442450944+138958463
>         14/05/08 11:53:47 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:48 INFO mapreduce.Job: Job job_local422695915_0001 running in uber mode : false
>         14/05/08 11:53:48 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:48 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:48 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:48 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:48 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:48 INFO mapreduce.Job:  map 0% reduce 0%
>         14/05/08 11:53:49 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:49 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000001_0
>         14/05/08 11:53:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:49 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:0+134217728
>         14/05/08 11:53:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:49 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
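[The "TabletServerBatchReader not shutdown" warning above recurs once per map task, which suggests a batch reader is opened during task setup and never closed. The Accumulo/GeoMesa classes aren't available in a standalone snippet, so this is only a sketch of the usual remedy, try-with-resources, using a hypothetical stand-in AutoCloseable; the point is that close() is guaranteed to run even if the scan body throws.]

```java
// StubBatchReader is a stand-in for a batch-scanner-like resource;
// it only records whether close() was ever invoked.
class StubBatchReader implements AutoCloseable {
    boolean closed = false;

    void scan() {
        // read entries here
    }

    @Override
    public void close() {
        closed = true;
    }
}

public class CloseDemo {
    public static void main(String[] args) {
        StubBatchReader reader = new StubBatchReader();
        // try-with-resources calls close() automatically when the block exits,
        // normally or via an exception, so the "not shutdown" warning cannot occur.
        try (StubBatchReader r = reader) {
            r.scan();
        }
        System.out.println("closed = " + reader.closed); // prints: closed = true
    }
}
```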
>         14/05/08 11:53:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:49 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:49 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:49 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000002_0
>         14/05/08 11:53:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:49 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:134217728+134217728
>         14/05/08 11:53:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:49 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:49 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:49 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:49 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000003_0
>         14/05/08 11:53:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:49 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:268435456+134217728
>         14/05/08 11:53:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:49 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:49 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:49 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:49 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000004_0
>         14/05/08 11:53:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:49 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:402653184+134217728
>         14/05/08 11:53:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:49 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:49 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:49 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:49 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000005_0
>         14/05/08 11:53:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:49 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:536870912+134217728
>         14/05/08 11:53:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:49 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:49 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000006_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:671088640+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000007_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:805306368+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000008_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:939524096+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000009_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1073741824+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000010_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1207959552+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000011_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1342177280+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:50 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:50 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:50 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:50 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000012_0
>         14/05/08 11:53:50 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:50 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1476395008+134217728
>         14/05/08 11:53:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000013_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1610612736+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000014_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1744830464+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000015_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:1879048192+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000016_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2013265920+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000017_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2147483648+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:51 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:51 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:51 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:51 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000018_0
>         14/05/08 11:53:51 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:51 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2281701376+134217728
>         14/05/08 11:53:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:52 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task:
>         attempt_local422695915_0001_m_000019_0
>         
>         14/05/08 11:53:52 INFO mapred.Task:  Using
>         ResourceCalculatorProcessTree : [ ]
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split:
>         hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2415919104
>         +134217728
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector
>         class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader:
>         TabletServerBatchReader not shutdown; did you forget to call
>         close()?
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi
>         26214396(104857584)
>         
>         14/05/08 11:53:52 INFO mapred.MapTask:
>         mapreduce.task.io.sort.mb: 100
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid =
>         104857600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396;
>         length = 6553600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map
>         output
>         
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task:
>         attempt_local422695915_0001_m_000020_0
>         
>         14/05/08 11:53:52 INFO mapred.Task:  Using
>         ResourceCalculatorProcessTree : [ ]
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split:
>         hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2550136832
>         +134217728
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector
>         class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader:
>         TabletServerBatchReader not shutdown; did you forget to call
>         close()?
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi
>         26214396(104857584)
>         
>         14/05/08 11:53:52 INFO mapred.MapTask:
>         mapreduce.task.io.sort.mb: 100
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid =
>         104857600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396;
>         length = 6553600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map
>         output
>         
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task:
>         attempt_local422695915_0001_m_000021_0
>         
>         14/05/08 11:53:52 INFO mapred.Task:  Using
>         ResourceCalculatorProcessTree : [ ]
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split:
>         hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2684354560
>         +134217728
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector
>         class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader:
>         TabletServerBatchReader not shutdown; did you forget to call
>         close()?
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi
>         26214396(104857584)
>         
>         14/05/08 11:53:52 INFO mapred.MapTask:
>         mapreduce.task.io.sort.mb: 100
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid =
>         104857600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396;
>         length = 6553600
>         
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000022_0
>         14/05/08 11:53:52 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2818572288+134217728
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:52 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000023_0
>         14/05/08 11:53:52 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:2952790016+134217728
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:52 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:52 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:52 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:52 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:52 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:52 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:52 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:52 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000024_0
>         14/05/08 11:53:52 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:52 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3087007744+134217728
>         14/05/08 11:53:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000025_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3221225472+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000026_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3355443200+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000027_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3489660928+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000028_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3623878656+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000029_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3758096384+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:53 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:53 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:53 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:53 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:53 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000030_0
>         14/05/08 11:53:53 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:53 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:3892314112+134217728
>         14/05/08 11:53:53 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000031_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4026531840+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000032_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4160749568+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000033_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4294967296+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000034_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4429185024+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000035_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4563402752+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:54 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:54 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:54 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:54 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000036_0
>         14/05/08 11:53:54 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:54 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4697620480+134217728
>         14/05/08 11:53:54 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:54 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:54 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:54 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:54 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000037_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4831838208+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000038_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:4966055936+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000039_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5100273664+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000040_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5234491392+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000041_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5368709120+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:55 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:55 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000042_0
>         14/05/08 11:53:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:55 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5502926848+134217728
>         14/05/08 11:53:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:55 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:55 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000043_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5637144576+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:56 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:56 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000044_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5771362304+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:56 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:56 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000045_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:5905580032+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:56 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:56 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000046_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6039797760+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:56 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:56 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000047_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6174015488+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:56 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:56 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:56 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:56 INFO mapred.LocalJobRunner: Starting task: attempt_local422695915_0001_m_000048_0
>         14/05/08 11:53:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
>         14/05/08 11:53:56 INFO mapred.MapTask: Processing split: hdfs://localhost:8020/gdelt/uncompressed/gdelt.tsv:6308233216+134217728
>         14/05/08 11:53:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>         14/05/08 11:53:57 WARN impl.TabletServerBatchReader: TabletServerBatchReader not shutdown; did you forget to call close()?
>         14/05/08 11:53:57 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
>         14/05/08 11:53:57 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
>         14/05/08 11:53:57 INFO mapred.MapTask: soft limit at 83886080
>         14/05/08 11:53:57 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
>         14/05/08 11:53:57 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
>         14/05/08 11:53:57 INFO mapred.MapTask: Starting flush of map output
>         14/05/08 11:53:57 INFO mapred.LocalJobRunner: map task executor complete.
>         14/05/08 11:53:57 WARN mapred.LocalJobRunner:
>         job_local422695915_0001
>         
>         java.lang.Exception: java.lang.RuntimeException:
>         org.apache.accumulo.core.client.AccumuloSecurityException:
>         Error BAD_AUTHORIZATIONS for user root - The user does not
>         have the specified authorizations assigned
>         
>                         at org.apache.hadoop.mapred.LocalJobRunner
>         $Job.runTasks(LocalJobRunner.java:462)
>         
>                         at org.apache.hadoop.mapred.LocalJobRunner
>         $Job.run(LocalJobRunner.java:522)
>         
>         Caused by: java.lang.RuntimeException:
>         org.apache.accumulo.core.client.AccumuloSecurityException:
>         Error BAD_AUTHORIZATIONS for user root - The user does not
>         have the specified authorizations assigned
>         
>                         at
>         org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.hasNext(TabletServerBatchReaderIterator.java:210)
>         
>                         at geomesa.core.data.AccumuloDataStore
>         $$anonfun$readMetadataItem
>         $1.apply(AccumuloDataStore.scala:169)
>         
>                         at geomesa.core.data.AccumuloDataStore
>         $$anonfun$readMetadataItem
>         $1.apply(AccumuloDataStore.scala:157)
>         
>                         at scala.collection.MapLike
>         $class.getOrElse(MapLike.scala:128)
>         
>                         at
>         scala.collection.AbstractMap.getOrElse(Map.scala:58)
>         
>                         at
>         geomesa.core.data.AccumuloDataStore.readMetadataItem(AccumuloDataStore.scala:157)
>         
>                         at
>         geomesa.core.data.AccumuloDataStore.getAttributes(AccumuloDataStore.scala:220)
>         
>                         at
>         geomesa.core.data.AccumuloDataStore.getSchema(AccumuloDataStore.scala:267)
>         
>                         at
>         geomesa.gdelt.GDELTIngestMapper.setup(GDELTIngestMapper.java:53)
>         
>                         at
>         org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
>         
>                         at
>         org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         
>                         at
>         org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>         
>                         at org.apache.hadoop.mapred.LocalJobRunner$Job
>         $MapTaskRunnable.run(LocalJobRunner.java:243)
>         
>                         at java.util.concurrent.Executors
>         $RunnableAdapter.call(Executors.java:471)
>         
>                         at
>         java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         
>                         at
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         
>                         at java.util.concurrent.ThreadPoolExecutor
>         $Worker.run(ThreadPoolExecutor.java:615)
>         
>                         at java.lang.Thread.run(Thread.java:744)
>         
>         Caused by:
>         org.apache.accumulo.core.client.AccumuloSecurityException:
>         Error BAD_AUTHORIZATIONS for user root - The user does not
>         have the specified authorizations assigned
>         
>                         at
>         org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:701)
>         
>                         at
>         org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:361)
>         
>                         at
>         org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>         
>                         at
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         
>                         at java.util.concurrent.ThreadPoolExecutor
>         $Worker.run(ThreadPoolExecutor.java:615)
>         
>                         at
>         org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>         
>                         at
>         org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         
>                         ... 1 more
>         
>         Caused by: ThriftSecurityException(user:root,
>         code:BAD_AUTHORIZATIONS)
>         
>                         at
>         org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result$startMultiScan_resultStandardScheme.read(TabletClientService.java:8165)
>         
>                         at
>         org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result$startMultiScan_resultStandardScheme.read(TabletClientService.java:8142)
>         
>                         at
>         org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result.read(TabletClientService.java:8081)
>         
>                         at
>         org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>         
>                         at
>         org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startMultiScan(TabletClientService.java:294)
>         
>                         at
>         org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:274)
>         
>                         at
>         org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:644)
>         
>                         ... 7 more
>         
>         14/05/08 11:53:57 WARN impl.TabletServerBatchReader:
>         TabletServerBatchReader not shutdown; did you forget to call
>         close()?
>         
>         14/05/08 11:53:58 INFO mapreduce.Job: Job
>         job_local422695915_0001 failed with state FAILED due to: NA
>         
>         14/05/08 11:53:58 INFO mapreduce.Job: Counters: 0
>         
>         Exception in thread "main" java.lang.Exception: Job failed
>         
>                         at
>         geomesa.gdelt.GDELTIngest.runMapReduceJob(GDELTIngest.java:152)
>         
>                         at
>         geomesa.gdelt.GDELTIngest.main(GDELTIngest.java:110)
>         
>                         at
>         sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         
>                         at
>         sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         
>                         at
>         sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         
>                         at
>         java.lang.reflect.Method.invoke(Method.java:606)
>         
>                         at
>         org.apache.hadoop.util.RunJar.main(RunJar.java:212)
>         
> 
>  
> 
> 
> _______________________________________________
> geomesa-dev mailing list
> geomesa-dev@xxxxxxxxxxxxxxxx
> http://locationtech.org/mailman/listinfo/geomesa-dev
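[Editor's note for readers of the archive: the `BAD_AUTHORIZATIONS` error in the trace above means the scan requested an authorization that has not been assigned to the Accumulo user. As noted earlier in the thread, the fix was to make the authorization used at ingest match one the user actually holds. A minimal sketch of assigning a scan authorization from the Accumulo shell, assuming administrative access and that the features were written with a "GDELT" visibility label; the instance and user names are illustrative:]

```shell
# In the Accumulo shell, as an administrative user:
root@instance> setauths -u root -s GDELT   # assign the 'GDELT' scan authorization to user 'root'
root@instance> getauths -u root            # verify the authorizations now assigned
```

Note that `setauths -s` replaces the user's full authorization list, so any existing authorizations should be included in the comma-separated value.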


