AeroGear Crypto Java 0.1.2 released
by Bruno Oliveira
Good morning, just to let you know that we released the bits for digital signatures and some bug fixes today.
Have a happy new year, sweethearts.
--
abstractj
2 years, 7 months
Simplify the metrics for sanity
by Matthias Wessendorf
Hi,
we do have a problem w/ our current metrics processing. It's complicated
(lots of CDI events and two different JMS messaging approaches...), it is
also slow (JPQL/JDBC), and it consumes a lot of memory and processing
time. This leads to bugs (incorrect stats) and eventually causes downtime,
due to the heavy processing.
Right now we do have metrics on push delivery:
Pending -> the submission to the 3rd party provider is in flight
Success -> we were able to connect, and could deliver *something*
Failure -> something obvious, like an invalid certificate (APNs), no
connection to the 3rd party possible, etc.
I'd like to dramatically simplify our metrics processing... to something
like:
Success -> could connect to the 3rd party, to deliver tokens
Failure -> something went wrong when talking to the 3rd party service.
Besides that, we also do a count of targeted devices. I don't think there
is much value in that. For instance, if APNs rejects some tokens, we do not
track those; we just show how many tokens our DB did find, not more. We
don't show anything of real interest. We could improve this (see below),
but I doubt that the current implementation is able to handle this well.
Also, on Android/FCM the numbers are even worse. We do, internally,
leverage their topics, so we usually end up sending exactly one push to
FCM, regardless of how many Android device-tokens we have in the DB. The
counter says 1 (one), because the server did target one topic (not n
devices).
So, for now, I'd like to dramatically simplify the code, and go with the
above Success/Failure solution.
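Just to make that concrete, here is a rough sketch of how small the model
could get (the class and field names below are made up for illustration,
they are not existing UPS code):

// hypothetical, simplified delivery metric: one outcome per push job,
// instead of the current per-device counters
public enum DeliveryOutcome {
    SUCCESS, // could connect to the 3rd party and hand over the tokens
    FAILURE  // something went wrong when talking to the 3rd party service
}

public class PushJobMetric {
    private final String pushJobId;
    private final DeliveryOutcome outcome;
    private final String failureReason; // e.g. "invalid certificate", may be null

    public PushJobMetric(final String pushJobId, final DeliveryOutcome outcome,
                         final String failureReason) {
        this.pushJobId = pushJobId;
        this.outcome = outcome;
        this.failureReason = failureReason;
    }
}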
However, I honestly think that, in the long run, we should get something
pluggable that allows us to process the metrics independently, outside of
the UPS code base. I think my previous Kafka mail addresses this
partially: the actual response and details about the push job should be
logged to some Kafka system, and an independent process should be able to
process those.
This will give us much more freedom and flexibility. Perhaps also, in the
future, we want some different stats, and something like Prometheus/Grafana:
https://prometheus.io/docs/visualization/grafana/
A more flexible system, with independent metrics 'calculation' processing,
will help us here.
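Just as an illustration of that direction, an independent metrics process
could expose little more than two counters via the Prometheus Java client
(metric names are made up, none of this exists in the UPS today):

import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class PushMetricsExporter {

    // counters an independent consumer would increment per processed push job
    static final Counter pushSuccesses = Counter.build()
            .name("ups_push_success_total")
            .help("Push jobs successfully handed over to the 3rd party provider.")
            .register();

    static final Counter pushFailures = Counter.build()
            .name("ups_push_failure_total")
            .help("Push jobs that failed when talking to the 3rd party provider.")
            .register();

    public static void main(final String[] args) throws Exception {
        // expose /metrics for Prometheus to scrape; Grafana visualizes from there
        new HTTPServer(9099);
    }
}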
Any thoughts?
-Matthias
--
Matthias Wessendorf
blog: http://matthiaswessendorf.wordpress.com/
twitter: http://twitter.com/mwessendorf
7 years, 7 months
FCM topic: use for alias as well?
by Matthias Wessendorf
Hi,
on FCM-related push, we do, in our client SDK, automatically subscribe a
client to an anonymous topic, matching our immutable variant ID.
If users specify categories, we map those into topics as well.
This is the related code in our Android SDK:
https://github.com/aerogear/aerogear-android-push/blob/master/aerogear-an...
How do people feel about doing that for the alias as well?
In the past we did not do it, since topics used to be a more restricted
resource. Remember, the first incarnation of topics (GCM v3, at that time)
even limited the maximum number of subscribers?
However, that has changed, and I think it would be nice if we just used
topics for each alias of the app as well. This would speed up delivering
the push request to the FCM backend, since the UPS would no longer need to
look up the devices: a push, regardless of how many devices it targets,
means one small HTTP request to Google, per alias (aka topic).
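Sketched on the client side, this would be little more than one extra
subscribeToTopic() call per alias (the class and helper names below are
hypothetical; the sanitizing is only there because FCM restricts topic
names to the characters [a-zA-Z0-9-_.~%]):

import com.google.firebase.messaging.FirebaseMessaging;

public class AliasTopicSubscriber {

    // subscribe the app instance to a topic derived from the alias, the same
    // way the SDK already does it for the variant ID and the categories
    public static void subscribeAlias(final String alias) {
        FirebaseMessaging.getInstance().subscribeToTopic(toTopicName(alias));
    }

    // hypothetical helper: replace characters FCM does not allow in topic names
    private static String toTopicName(final String alias) {
        return alias.replaceAll("[^a-zA-Z0-9_.~%-]", "_");
    }
}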
Any thoughts?
NOTE: There is a general limit to prevent topic abuse, but that's on the app
instance (see [1]), so our app developers need to make sure they don't go
crazy w/ a gazillion categories ;-)
-Matthias
[1] https://firebase.google.com/docs/cloud-messaging/admin/errors
--
Matthias Wessendorf
blog: http://matthiaswessendorf.wordpress.com/
twitter: http://twitter.com/mwessendorf
7 years, 7 months
Google Summer of Code: WELCOME Dimitra and Polina
by Matthias Wessendorf
Hello AeroGear community,
as the AeroGear project lead, I am thrilled to announce that this year we
have two students working on the AeroGear UnifiedPush Server during
Google Summer of Code.
Both students applied to the "Apache Kafka and Apache HBase" improvements
project and delivered a great paper and a detailed implementation plan.
Over the summer, Dimitra and Polina will work with us on this project, in
the open, in the community.
The main access points for our community are listed below. Most of us hang
around on IRC, and of course, if there are questions, please send mail to
our mailing list.
Details:
https://aerogear.org/community/
Dimitra and Polina, I'd like to ask you to introduce yourselves here on this
mailing list (remember to subscribe to it), so that the larger AeroGear
community gets to know you better!
Cheers,
Matthias
--
Matthias Wessendorf
blog: http://matthiaswessendorf.wordpress.com/
twitter: http://twitter.com/mwessendorf
7 years, 7 months
OpenShift UPS template
by Dimitra Zuccarelli
Hi,
I've just had to wipe my computer, so I'm in the process of trying to get
OpenShift back up and running with the UPS template.
I can't seem to get it working, because the MySQL pods consistently fail
with this error:
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use
--explicit_defaults_for_timestamp server option (see documentation for more
details).
[ERROR] The server option 'lower_case_table_names' is configured to use
case sensitive table names but the data directory is on a case-insensitive
file system which is an unsupported combination. Please consider either
using a case sensitive file system for your data directory or switching to
a case-insensitive table name mode.
[ERROR] Aborting
I've set the env variable LOWER_CASE_TABLE_NAMES to 2
(https://hub.docker.com/r/centos/mysql-57-centos7/), but I'm still getting
the same error.
Does anyone have any insight on how to fix this?
Thanks!
Dimitra
7 years, 7 months
Notification Delivery metrics and processing with Kafka
by Matthias Wessendorf
Hi,
with the new APNs HTTP/2 API, and our usage of Pushy, we are able to get
much more fine-grained knowledge of whether Apple accepted (for further
processing) or rejected a message, on a per-device_token level!
For instance, if we have a push with 5000 targeted devices, we are now able
to say that, for instance, 5 tokens failed, but APNs was happy to accept
the push requests for the other 4995 devices (note: this does NOT mean they
actually arrived at the devices, just that Apple accepted them for further
processing).
Now, this, for APNs, gives us much more flexibility in handling our metrics!
In our code, we read the response for *each* token from APNs here:
https://github.com/aerogear/aerogear-unifiedpush-server/blob/20831d961966...
So here, we could simply send the result, on a per-token basis, to a (Kafka)
topic, like:
...
if (pushNotificationResponse.isAccepted()) {
    logger.trace("Push notification for '{}' (payload={})", deviceToken,
            pushNotificationResponse.getPushNotification().getPayload());
    producer.send(jobID, "Success"); // sends to the "push_messages" topic
} else {
    final String rejectReason = pushNotificationResponse.getRejectionReason();
    logger.trace("Push Message has been rejected with reason: {}", rejectReason);
    producer.send(jobID, "Rejected"); // sends to the "push_messages" topic
    ...
}
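(The producer.send(jobID, ...) above is shorthand; wired up with the plain
Kafka producer API it would look roughly like this, with the jobID as the
record key and the broker address as a placeholder:)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

final Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

final Producer<String, String> producer = new KafkaProducer<>(props);
// key = jobID, value = per-token outcome; everything lands on the "push_messages" topic
producer.send(new ProducerRecord<>("push_messages", jobID, "Success"));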
Now, this sends everything to one topic, and we could use, somewhere, the
Kafka Streams API to perform some processing of that source and calculate
some stats on it, like:
KStreamBuilder builder = new KStreamBuilder();

// read from the topic that contains all messages, for all jobs
final KStream<String, String> source = builder.stream("push_messages");

// some simple processing: group by key, apply a predicate, and send the
// results to three "analytic" topics:
final KTable<String, Long> successCountsPerJob = source
        .filter((key, value) -> value.equals("Success"))
        .groupByKey()
        .count("successMessagesPerJob");
successCountsPerJob.to(Serdes.String(), Serdes.Long(), "successMessagesPerJob");

final KTable<String, Long> failCountsPerJob = source
        .filter((key, value) -> value.equals("Rejected"))
        .groupByKey()
        .count("failedMessagesPerJob");
failCountsPerJob.to(Serdes.String(), Serdes.Long(), "failedMessagesPerJob");

source.groupByKey()
        .count("totalMessagesPerJob")
        .to(Serdes.String(), Serdes.Long(), "totalMessagesPerJob");
The above performs some functional processing of the single source of
truth, based on different predicates.
If one had a simple consumer on each of these three "analytic" topics, a
simple logging output would be:
2017-05-16 13:42:48,763 INFO successMessagesPerJob: 2 - jobID: XXX
2017-05-16 13:42:48,764 INFO totalMessagesPerJob: 3 - jobID: XXX
2017-05-16 13:42:48,764 INFO failedMessagesPerJob: 1 - jobID: XXX
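Such a consumer is trivial; a sketch against the plain consumer API, for
one of the three topics, could look like this (group id and broker address
are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final Logger logger = LoggerFactory.getLogger("metrics-logger");

final Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "metrics-logger");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");

final KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("successMessagesPerJob"));
while (true) {
    // key = jobID, value = current count for that job, as written by the KTable
    final ConsumerRecords<String, Long> records = consumer.poll(100);
    for (final ConsumerRecord<String, Long> record : records) {
        logger.info("successMessagesPerJob: {} - jobID: {}", record.value(), record.key());
    }
}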
Since for GSoC we do have two students working on Kafka and HBase
improvements for the UPS, I wanted to share this quick prototype as food
for thought.
Of course, each of these 'filtered' consumers could then eventually store
the result somewhere else.
With this approach, Kafka would become the hub (or data pipeline) for our
metrics, with stream processing and different consumers dealing with the
results of interest.
Any comments or other thoughts?
-Matthias
--
Matthias Wessendorf
blog: http://matthiaswessendorf.wordpress.com/
twitter: http://twitter.com/mwessendorf
7 years, 7 months