You're just trying to get me to resurrect my simple little parser :P
That's essentially all I did: take a format pattern, parse it and build a regex out of it.
It worked fine, but exceptions were a problem. I just had an enum[1] that mapped each
pattern format type to a corresponding regex. Note though that I kind of suck at regex, so
it might be horrible.
[1]:
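(Roughly along these lines, as a simplified illustration only -- this is not the actual
enum, and the conversion characters and regexes here are made up and stripped down:)

import java.util.regex.Pattern;

// Hypothetical sketch: each pattern conversion paired with a (simplified) regex
// fragment, then stitched together to match the start of one record.
public class LogPatternSketch {

    enum FormatSegment {
        TIMESTAMP("%d", "(?<timestamp>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})"),
        LEVEL("%p", "(?<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)"),
        CATEGORY("%c", "\\[(?<category>[^\\]]+)\\]"),
        THREAD("%t", "\\((?<thread>[^)]+)\\)"),
        MESSAGE("%s", "(?<message>.*)");

        final String conversion;
        final String regex;

        FormatSegment(String conversion, String regex) {
            this.conversion = conversion;
            this.regex = regex;
        }
    }

    // Rough equivalent of a "%d %-5p [%c] (%t) %s" style format.
    static final Pattern RECORD_START = Pattern.compile(
            FormatSegment.TIMESTAMP.regex + " +" + FormatSegment.LEVEL.regex + " +"
                    + FormatSegment.CATEGORY.regex + " " + FormatSegment.THREAD.regex + " "
                    + FormatSegment.MESSAGE.regex);
}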
Custom pattern formats could be a problem, but as long as the format has delimiters it can
be pretty reliable. As an example, a category search can include the [] delimiters, and you
can search backwards for a timestamp to detect the start of a record. A stack trace will
always end before the next record starts.
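(That back-search really boils down to "a line starting with a timestamp begins a new
record; everything else, stack traces included, belongs to the previous one". A rough,
untested sketch, with the timestamp format hard-coded as an assumption:)

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Sketch only: a line that starts with a timestamp begins a new record; anything
// else (stack trace lines, wrapped messages) is appended to the current record.
public class RecordSplitter {

    private static final Pattern RECORD_START =
            Pattern.compile("^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3} ");

    public static List<String> split(Path logFile) throws IOException {
        List<String> records = new ArrayList<>();
        StringBuilder current = null;
        try (BufferedReader reader = Files.newBufferedReader(logFile, StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (RECORD_START.matcher(line).find()) {
                    if (current != null) {
                        records.add(current.toString());
                    }
                    current = new StringBuilder(line);
                } else if (current != null) {
                    current.append('\n').append(line);
                }
            }
        }
        if (current != null) {
            records.add(current.toString());
        }
        return records;
    }
}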
Although, who knows, maybe simple grep-like behavior is good enough to start with.
On Aug 14, 2013, at 1:20 PM, "James R. Perkins" <jperkins(a)redhat.com> wrote:
> The problem with filtering comes in parsing the log file. While I have had some success
> doing this with a tool I was playing with for Jesper, once you get a stack trace you
> have to start making guesses. I tend to agree with dmlloyd in that log files tend to be
> a one-way write. You can do some parsing with a best guess, but, well, we all know what
> happens when we start assuming and guessing :)
>
> That said, I do like the challenge of trying to parse log files. I did start a project
> to do it in my spare time; I just haven't found time to put into working on it lately.
>
> On 08/14/2013 11:12 AM, Jason Greene wrote:
>> We could technically support server-side filtering; we simply pattern search the file.
>> We could also support client-side filtering, for example by utilizing browser offline
>> storage. The big issue is just that there are efficiency limits on the size of the
>> file, since it's not an indexed store. That could be improved by using a real indexed
>> data store (e.g. something like bdb). In the future I could picture an SPI that logging
>> backends could implement for the purpose of searching, although I think we will find
>> that simply searching the normal text log file is good enough.
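(If we did go the SPI route, I'd picture something as dumb as this -- purely a strawman,
none of these names exist today:)

import java.util.List;

// Strawman only: a minimal contract a logging backend could implement so that
// searching is decoupled from how the records are actually stored.
public interface LogSearchProvider {

    /**
     * Returns up to maxResults formatted records at or above the given level that
     * match the text pattern (null means "match everything"), newest first. Whether
     * the backend does a plain text scan or hits an indexed store is its business.
     */
    List<String> search(String minimumLevel, String textPattern, int maxResults);
}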
>>
>> On Aug 14, 2013, at 12:55 PM, James R. Perkins <jperkins(a)redhat.com> wrote:
>>
>>> That's my thinking too. The only complaint I've had about that solution is that it
>>> can't be filtered since it's just raw text. I had a working example operation that
>>> just took a file name and the number of bytes to read. To me this is the best
>>> solution, but it wouldn't have access to anything in a syslog or the console. Though
>>> that's okay IMO since they likely have other viewers for those.
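(For the sake of discussion, the tail-reading part is only a handful of lines -- this is a
sketch on my end, not the actual example operation:)

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Sketch: return at most the last maxBytes bytes of the file as text.
public class LogTail {

    public static String tail(File file, long maxBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            long length = raf.length();
            long start = Math.max(0, length - maxBytes);
            raf.seek(start);
            byte[] buffer = new byte[(int) (length - start)];
            raf.readFully(buffer);
            return new String(buffer, StandardCharsets.UTF_8);
        }
    }
}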
>>>
>>> I'll resurrect that example. It shouldn't take all that long. One thing to note is it
>>> will only allow files to be read that are in the jboss.server.log.dir (and possibly
>>> only files that end in *.log?).
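(The path sanity check is presumably something along these lines -- again only a sketch
with made-up names, not the actual check:)

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: only allow files that resolve to somewhere under jboss.server.log.dir
// and that end in .log; reject anything that escapes the directory via "..".
public class LogFileCheck {

    public static Path resolveAllowed(String fileName) throws IOException {
        Path logDir = Paths.get(System.getProperty("jboss.server.log.dir")).toRealPath();
        Path requested = logDir.resolve(fileName).normalize();
        if (!requested.startsWith(logDir) || !requested.getFileName().toString().endsWith(".log")) {
            throw new IllegalArgumentException("Not a readable log file: " + fileName);
        }
        return requested;
    }
}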
>>>
>>> So in general I agree here; I think this is the best and least invasive approach,
>>> with the lowest performance hit.
>>>
>>> On 08/14/2013 10:36 AM, Jason Greene wrote:
>>>> IMO the best solution is a management operation that simply displays portions of the
>>>> log file. The dependency on the log file is IMO not a problem because we support
>>>> multiple appenders.
>>>>
>>>> On Aug 14, 2013, at 12:03 PM, James R. Perkins <jperkins(a)redhat.com> wrote:
>>>>
>>>>> I had posted this to another list, but this is a more appropriate place for it. I
>>>>> think there needs to be a general discussion around this as it's been mentioned, at
>>>>> least to me, a few times here and there, and I know Heiko raised the issue some time
>>>>> ago now.
>>>>>
>>>>> The original JIRA, WFLY-280[1], is to display the last 10 error messages only. To
>>>>> be honest I wouldn't find that very useful. To me, if I'm looking for logs I want to
>>>>> see all logs, but that's not always so easy; the syslog-handler, for example,
>>>>> doesn't log to a file, so there is no way to read those messages back.
>>>>>
>>>>> The current plan for the last 10 error messages is that we store messages in a
>>>>> queue that can be accessed via an operation. This works fine until the error message
>>>>> you're interested in is the 11th, or you want to see warning messages.
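(Something like a handler holding a bounded deque is what I imagine for that queue -- a
sketch only; the level and capacity here are my assumptions, not the actual plan:)

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// Sketch of the "last N errors" idea: a handler that keeps only the most recent
// N records at or above a given level, for an operation to read back.
public class BoundedErrorHandler extends Handler {

    private final int capacity;
    private final Deque<LogRecord> records = new ArrayDeque<>();

    public BoundedErrorHandler(int capacity) {
        this.capacity = capacity;
        setLevel(Level.SEVERE);
    }

    @Override
    public synchronized void publish(LogRecord record) {
        if (!isLoggable(record)) {
            return;
        }
        if (records.size() == capacity) {
            records.removeFirst();
        }
        records.addLast(record);
    }

    public synchronized List<LogRecord> snapshot() {
        return new ArrayList<>(records);
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
    }
}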
>>>>>
>>>>> Another option I had come up with is reading back the contents of the file, for
>>>>> example the server.log. This could be problematic too in that there is no way to
>>>>> filter the information, e.g. to see only error messages or only warning messages. To
>>>>> solve this I have considered creating a JSON formatter so the results could be
>>>>> queried, but I don't think it should be the default, which means the console
>>>>> couldn't reliably assume it's getting back JSON.
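(The formatter piece of that seems pretty small -- a naive sketch, ignoring the thrown
exception, MDC/NDC, and anything beyond basic string escaping:)

import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// Naive sketch of a JSON formatter: one JSON object per record. A real one would
// need complete escaping, the thrown exception, formatted timestamps, etc.
public class JsonFormatter extends Formatter {

    @Override
    public String format(LogRecord record) {
        return "{\"timestamp\":" + record.getMillis()
                + ",\"level\":\"" + record.getLevel().getName() + '"'
                + ",\"category\":\"" + escape(record.getLoggerName()) + '"'
                + ",\"message\":\"" + escape(formatMessage(record)) + "\"}\n";
    }

    private static String escape(String s) {
        return s == null ? "" : s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
    }
}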
>>>>>
>>>>> I've also thought about creating a handler that uses websockets to send messages,
>>>>> though I haven't tested this and it may not work at all. I'm not sure how well it
>>>>> would work, and it's possible it may not even work for bootstrap logging.
>>>>>
>>>>> With regards to audit logging, we're probably going to have to do something totally
>>>>> different from what we'll do in the logging subsystem since it doesn't use standard
>>>>> logging.
>>>>>
>>>>> I guess the bottom line is: what does the console want to see? Do you want to see
>>>>> all raw text log messages? Do you want all messages, but in a format like JSON that
>>>>> you can query/filter? Do you really want only the last 10 error messages? All or
>>>>> none of these might be possible, but I really need to understand the needs before I
>>>>> can explore in more depth what the best option would be.
>>>>>
>>>>> [1]: https://issues.jboss.org/browse/WFLY-280
>>>>> --
>>>>> James R. Perkins
>>>>> Red Hat JBoss Middleware
>>>>>
>>>>> _______________________________________________
>>>>> wildfly-dev mailing list
>>>>> wildfly-dev(a)lists.jboss.org
>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>> --
>>>> Jason T. Greene
>>>> WildFly Lead / JBoss EAP Platform Architect
>>>> JBoss, a division of Red Hat
>>>>
>>> --
>>> James R. Perkins
>>> Red Hat JBoss Middleware
>>>
>> --
>> Jason T. Greene
>> WildFly Lead / JBoss EAP Platform Architect
>> JBoss, a division of Red Hat
>>
> --
> James R. Perkins
> Red Hat JBoss Middleware
>
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat