Hi Jessica,
I believe the producerFS is only used to write features, so
calling getFeatures on it will never return anything. The features
that you write with the producerFS should be visible if you call
getFeatures on the consumerFS. However, if no features are showing
up on the consumer side, then something is likely wrong.
You could try checking your Kafka logs, and also check for the
topics created by GeoMesa, which you can consume using the Kafka
command-line tools (you won't be able to easily deserialize the
messages, but it will at least indicate whether they were written at all).
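To illustrate, here is a minimal sketch of the expected round trip,
assuming the stores from the KafkaQuickStart are already set up (the
size printout is just an example):

    // producerFS only writes to Kafka; reads happen on the consumer side
    producerFS.addFeatures(featureCollection);
    // producerFS.getFeatures() will always be empty - the producer never reads
    SimpleFeatureCollection consumed = consumerFS.getFeatures();
    System.out.println(consumed.size() + " features visible to the consumer");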
Hope that helps,
Emilio
On 05/03/2016 05:55 AM, yingxin li wrote:
Hi Emilio,
Sorry to bother you again.
I ran into a problem when trying the Kafka datastore. When I
debug the KafkaQuickStart tutorial, the producerDS and
consumerDS are created successfully. But after the
statement "producerFS.addFeatures(featureCollection);",
producerFS.getFeatures() returns nothing. It seems
producerFS.addFeatures is not adding the features, but I did not
get any exception. The log says "Consuming with the live
consumer... 0 features were written to Kafka".
My parameters are as below:
KAFKA_BROKER_PARAM: "dsj-s4:9092,dsj-s3:9092,dsj-s5:9092,dsj-s6:9092,dsj-s1:9092"
ZOOKEEPERS_PARAM: "dsj-s3:2181,dsj-s13:2181,dsj-s1:2181,dsj-s11:2181,dsj-s6:2181"
ZK_PATH: "/kafka"
Thanks & Regards,
Jessica
Hi Jessica,
DWithin always seems to operate in degrees. That said,
we have some code for handling different units that you
might find useful:
Note that conversions between degrees and meters might
not be entirely accurate depending on your latitude and polygon.
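As a rough sketch (not the unit-handling code referenced above), you can
approximate a metric distance in degrees before building the filter. The
111.32 km-per-degree constant is only exact at the equator, which is where
the accuracy caveat comes from; the values below are just examples:

    // approximate a 10 km radius in degrees near latitude 1
    double km = 10.0;
    double lat = 1.0;
    double latDegrees = km / 111.32;
    double lonDegrees = km / (111.32 * Math.cos(Math.toRadians(lat)));
    double degrees = Math.max(latDegrees, lonDegrees); // err on the inclusive side
    // the distance seems to be interpreted as degrees regardless of the unit string
    String cql = "DWITHIN(geom, POINT(103 1), " + degrees + ", meters)";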
Hope that helps,
Emilio
On Wed, 2016-04-27 at 08:42 +0000, yingxin li wrote:
Hi Emilio,
For the DWITHIN function in CQL, the unit is degrees,
not meters. My CoordinateReferenceSystem is EPSG:4326.
For example, we'd like to get all locations whose
distance from one specific polygon is less than 10 km,
along with their distances. Do you have any suggestions on
this? Thanks in advance!
Best Regards,
Jessica
Hi Emilio,
Thank you so much!
It is much faster now using Filter.INCLUDE;
it only takes 5 seconds :)
Best Regards,
Jessica
Hi Jessica,
Our accumulo data store is definitely the most robust. Our HBase
store is a prototype, but I wouldn't expect it to take 40 minutes...
Most likely it is due to the large time range you're querying. Our
index is split based on a week interval, so your query below would
result in 300+ sequential scans.
It might prove faster in that case to query for Filter.INCLUDE, and
then manually apply the precise filter to the results like so:
    Filter filter = ECQL.toFilter(cql);
    Query query = new Query(typeName, Filter.INCLUDE);
    FeatureReader<SimpleFeatureType, SimpleFeature> fr =
        ds.getFeatureReader(query, Transaction.AUTO_COMMIT);
    while (fr.hasNext()) {
        SimpleFeature elem = fr.next();
        if (filter.evaluate(elem)) {
            ... // process the matching feature
        }
    }
    fr.close(); // don't forget to release the reader
If you have the option, the accumulo data
store is likely to work much better for a
complex system.
Thanks,
Emilio
On Tue, 2016-04-26 at 10:04 +0000, yingxin li wrote:
Hi Emilio,
Did you ever compare the performance between the accumulo
datastore and the hbase datastore? I added 163,000 features
into the HBase data store, and used the code below to query;
it took almost 40 minutes in our test environment.
When we use Accumulo as the datastore, it only takes
10 seconds. Is anything wrong with my code below?
    String cql = "DWITHIN(geom, POINT(103 1), 10, kilometers) and recordtime DURING 2010-01-01T01:00:00.000Z/2016-05-01T23:59:59.000Z";
    Query query = new Query(typeName, ECQL.toFilter(cql));
    FeatureReader<SimpleFeatureType, SimpleFeature> fr =
        ds.getFeatureReader(query, Transaction.AUTO_COMMIT);
    int row = 0;
    while (fr.hasNext()) {
        SimpleFeature elem = fr.next();
        row++;
        // System.out.println(elem.getAttributes().toString());
    }
    System.out.println(new Date());
    System.out.println(row + " locations found.");
This is how I added features:
    SimpleFeatureCollection featureCollection = new ListFeatureCollection(sft, list);
    ((ContentFeatureStore) fs).addFeatures(featureCollection);
Thanks,
Jessica
Hi Emilio,
In my last email I got the error "Column family M does not
exist in region", so I changed my column family name to "M"
and ran again; now I get another error:
Exception in thread "main" java.io.IOException: Schema 'polygon' does not exist.
    at org.geotools.data.store.ContentDataStore.ensureEntry(ContentDataStore.java:621)
    at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:393)
    at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:376)
    at org.geotools.data.store.ContentDataStore.getFeatureReader(ContentDataStore.java:425)
And this is my code:
    String tablename = "cdb_vessel:vessel_polygon_test";
    Map map = new HashMap();
    map.put("bigtable.table.name", tablename);
    DataStore ds = DataStoreFinder.getDataStore(map);
    String typeName = "polygon";
    String cql = "bbox(polygon_new,-180,-90,180,90)";
    Query query = new Query(typeName, ECQL.toFilter(cql));
    FeatureReader fr = ds.getFeatureReader(query, Transaction.AUTO_COMMIT);
    ArrayList<SimpleFeature> features = new ArrayList<SimpleFeature>();
    for (int i = 0; i < 10; i++) {
        features.add((SimpleFeature) fr.next());
    }
What should the typeName be? Should I define a type in
a config? Thanks in advance!
Regards,
Jessica
Hi Emilio,
Thank you very much for your quick response.
I can get the datastore now, but when I try to query
data from the HBase table, I get the error below:
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException): org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family M does not exist in region cdb_vessel:latest_location,,1453259842944.0365e0ba025cfecd5d37e66ed47a8ef6. in table 'cdb_vessel:latest_location', {NAME => 'details', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
    at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:6916)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2385)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2365)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2115)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31305)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31751)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:332)
Could you please help with this?
By the way, is typeName="testsft" the column family name in
https://github.com/locationtech/geomesa/blob/master/geomesa-hbase/geomesa-hbase-datastore/src/test/scala/org/locationtech/geomesa/hbase/data/HBaseIntegrationTest.scala?
Thanks,
Jessica
Hi Jessica,
Our HBase support is new, and we haven't documented it very
well, so sorry for that. In order to use it, you would need the
'geomesa-hbase-datastore' jar on your classpath, which is
available here:
(if you're not using maven you will need the dependent jars as
well, which are described in the pom).
You also need to have your hbase-site.xml on your classpath.
Once you have those two things, you can get an HBase data store
using the GeoTools DataStoreFinder, passing in a map with the
parameter "bigtable.table.name" and the name of the table you
want to store your data in.
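For example, something like this minimal sketch (the table name
is just a placeholder):

    // assumes hbase-site.xml and the geomesa-hbase-datastore jar are on the classpath
    Map<String, Serializable> params = new HashMap<>();
    params.put("bigtable.table.name", "mytable"); // placeholder table name
    DataStore ds = DataStoreFinder.getDataStore(params); // throws IOException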
If you are using GeoServer, we have a module you can build to
install there as well. It looks like currently that is not being
published, but you can build it from source following the readme
here:
Thanks,
Emilio
On Tue, 2016-04-19 at 08:44 +0000, yingxin li wrote:
Dear all,
We decided to use GeoMesa to process our spatial data, and we
are using HBase to store the data. In the GeoMesa manual, I did
not find how to install GeoMesa with HBase. Could you please
help with this? Any help would be much appreciated.
Best Regards,
Jessica
_______________________________________________
geomesa-dev mailing list
geomesa-dev@xxxxxxxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
http://locationtech.org/mailman/listinfo/geomesa-dev