Re: [Apiman-dev] APIMAN-451 Doc
by Brandon Gaisford
Hey all,
I don’t have much time of late to work on this task, but I have found some cycles here and there. I was able to hook up the Liquibase “offline” mode; pretty straightforward. I ran some cursory tests against H2 using the existing PR changelogs, then updated the JPA entities and did a diff to see the level of effort to reconcile the changelogs. The results that come back are somewhat perplexing: all the unique constraints come back as new adds, the hibernate_sequence comes back as a drop, etc., etc. The devil is in the details. :)
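For anyone following along, the “offline” mode is driven entirely by a special connection URL; no live database is needed, and the applied-changeset state is tracked in a local CSV file. The file names below are just examples, not what’s in the PR:

```
# Liquibase offline URLs: same update/updateSQL commands, no live DB
offline:h2?changeLogFile=databasechangelog.csv
offline:postgresql?changeLogFile=databasechangelog.csv
```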
What do you guys think about introducing an “apiman” schema? It’s probably a good idea to get away from the default public schema. I’m not sure of your install base and what downstream effects this has. Think about it.
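If we went that route, a sketch of the changelog shape (changeset ids and the table here are made up, just to show where the schema plugs in):

```xml
<changeSet id="000-create-schema" author="apiman" dbms="postgresql">
    <sql>CREATE SCHEMA IF NOT EXISTS apiman</sql>
</changeSet>
<changeSet id="001-example-table" author="apiman">
    <!-- subsequent changesets qualify objects via schemaName -->
    <createTable schemaName="apiman" tableName="example">
        <column name="id" type="BIGINT">
            <constraints primaryKey="true" nullable="false"/>
        </column>
    </createTable>
</changeSet>
```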
I’ll keep plugging away as I have time.
Best regards,
Brandon
On Jun 29, 2015, at 6:42 AM, Brandon Gaisford <bgaisford(a)punagroup.com> wrote:
>
> Hey Brett,
>
> Thanks for the feedback! Excellent find on the Liquibase “offline” mode, thank you. I’ll give it a try as soon as I can get to it.
>
> Regarding your item 3, many production operations guys will only allow vetted SQL scripts to progress onto their production systems. In my opinion, it would be unwise not to support SQL DDLs and SQL migration scripts. I like the Liquibase managed approach as well, I think some users would really appreciate it. A hybrid approach perhaps?
>
> Brandon
>
> On Jun 29, 2015, at 5:04 AM, Brett Meyer <brmeyer(a)redhat.com> wrote:
>
>> Hey guys, apologies for the delayed response! Brandon, thanks for taking a look at Liquibase. Artificer is going to need a similar strategy...
>>
>> Some thoughts:
>>
>> 1.) Although I understand the desire to lean on Hibernate's SchemaCreator, SchemaUpdater, and other tools, I'd highly advise against relying on them in production environments. (I'm one of the Hibernate ORM core devs.) The tools have to make quite a few assumptions, many of which end up being less than ideal. Even with additional annotations (column sizes, column types, indexes, etc.), you still end up with something that needs a lot of optimization help. It's far better to control a set of manually-maintained DDL SQL scripts, as apiman and Artificer have done. SchemaCreator can still fit into that picture, though: I'll typically let it generate DDL for all my supported dialects, then update the scripts by hand. It helps with some of the busy work...
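A minimal sketch of that generate-then-hand-edit flow, using the standard JPA 2.1 schema-generation properties (the target file name is invented; repeat per dialect):

```
# Emit a create script without touching a database, then commit the
# script and maintain it by hand from there on.
javax.persistence.schema-generation.scripts.action=create
javax.persistence.schema-generation.scripts.create-target=apiman_h2.ddl
hibernate.dialect=org.hibernate.dialect.H2Dialect
```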
>>
>> 2.) As I understand it, Liquibase does *not* require a live DB. See http://www.liquibase.org/documentation/offline.html.
>>
>> 3.) What do you guys think about forcing Liquibase as the only DDL installation route, as opposed to also including the literal, most-up-to-date SQL scripts in the distro?
>>
>> ----- Original Message -----
>>> From: "Eric Wittmann" <eric.wittmann(a)redhat.com>
>>> To: "Brandon Gaisford" <bgaisford(a)punagroup.com>
>>> Cc: "Marc Savy" <marc.savy(a)redhat.com>, "Brett Meyer" <brmeyer(a)redhat.com>
>>> Sent: Thursday, June 25, 2015 7:11:50 AM
>>> Subject: Re: APIMAN-451 Doc
>>>
>>> OK that does make it clearer, thanks.
>>>
>>> On 6/24/2015 6:13 PM, Brandon Gaisford wrote:
>>>>
>>>> I don’t think I answered your question #3 very well. I’ll take another
>>>> stab at it here just to be clear. The process pretty much goes like this:
>>>>
>>>> 1) We have a set of changelogs that describe the desired end state of our
>>>> database (all the changelogs in the current folder)
>>>> 2) Given an existing database (empty for DDL creation), we ask liquibase to
>>>> update the existing database so it is in sync with the current changelogs
>>>> 3) Liquibase queries the existing database and asks, “what changeset are
>>>> you currently on?”
>>>> 4) Given the existing changeset the db is at, and the final changeset the
>>>> current changelogs are at, liquibase knows what changesets to apply to
>>>> update the database.
>>>> 5) We can instruct liquibase to not actually update the database, but
>>>> instead emit the SQL that would be used to make the update
>>>>
>>>> That’s how the migration scripts (and DDLs) would be created.
>>>> Additionally, given two databases built using liquibase changelogs, one
>>>> could diff those databases and also generate a migration SQL script.
>>>>
>>>> Hope that’s clearer. Note that Liquibase can only diff two databases;
>>>> update and diff are different operations.
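Concretely, steps 1–5 hang off a master changelog along these lines (the second include name is invented just to show the shape); running Liquibase’s `updateSQL` command against it, instead of `update`, performs step 5 and emits the migration SQL without modifying the database:

```xml
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <!-- ordered changelogs describing the desired end state (step 1) -->
    <include file="000-apiman-manager-api.db.sequences.changelog.xml"/>
    <include file="010-apiman-manager-api.db.tables.changelog.xml"/>
</databaseChangeLog>
```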
>>>>
>>>> Brandon
>>>>
>>>> On Jun 24, 2015, at 9:30 AM, Eric Wittmann <eric.wittmann(a)redhat.com>
>>>> wrote:
>>>>
>>>>> Thanks - I think I understand now (regarding point #3). Basically the
>>>>> downside to using liquibase is that to generate the DB-specific DDLs we
>>>>> need live databases.
>>>>>
>>>>> -Eric
>>>>>
>>>>> On 6/24/2015 3:14 PM, Brandon Gaisford wrote:
>>>>>> Hey Eric,
>>>>>>
>>>>>> Regarding your queries below:
>>>>>>
>>>>>> 1) Have a look at 000-apiman-manager-api.db.sequences.changelog.xml.
>>>>>> That file contains the DBMS specific sequence creation changesets. We
>>>>>> should create a new file along the same lines to deal with indexes.
>>>>>> I’ll add that to my TODO list.
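For flavor, the DBMS-specific changesets in that file look roughly like this (ids abbreviated here), and an index changelog could follow the same pattern using `<createIndex>`:

```xml
<changeSet id="seq-pg" author="apiman" dbms="postgresql">
    <createSequence sequenceName="hibernate_sequence"/>
</changeSet>
<changeSet id="seq-oracle" author="apiman" dbms="oracle">
    <createSequence sequenceName="hibernate_sequence"/>
</changeSet>
<!-- each supported DBMS gets its own changeset, gated by the dbms
     attribute, or is skipped where sequences aren't supported -->
```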
>>>>>>
>>>>>> 2) Yes, if we manage the changelogs (specifically the meta-data within
>>>>>> the changesets) correctly, we can produce db version x to version y
>>>>>> migration scripts on demand.
>>>>>>
>>>>>> 3) Currently liquibase can only compare two databases. So to produce
>>>>>> the update SQL (migration script) to go from one database version to
>>>>>> another requires two databases. Those databases would be created using
>>>>>> the appropriate changelogs for that specific database version.
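In CLI terms, the diff takes a target and a reference connection, e.g. via a liquibase.properties along these lines (URLs illustrative):

```
# liquibase diffChangeLog: compare two live databases and emit the
# changesets needed to bring the target in line with the reference
url=jdbc:postgresql://localhost/apiman_v1
referenceUrl=jdbc:postgresql://localhost/apiman_v2
```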
>>>>>>
>>>>>> Hope this makes sense.
>>>>>>
>>>>>> Brandon
>>>>>>
>>>>>>
>>>>>> On Jun 24, 2015, at 6:29 AM, Eric Wittmann <eric.wittmann(a)redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Brandon.
>>>>>>>
>>>>>>> Thanks for the high level doc - very informative.
>>>>>>>
>>>>>>> I have a couple of questions (I haven't looked deeply into the PR yet).
>>>>>>>
>>>>>>> 1) Where/how do we manage indexes? Currently we are adding a variety of
>>>>>>> indexes in the manual DDLs.
>>>>>>> 2) Can differential migration scripts be generated on-demand between any
>>>>>>> two arbitrary versions of apiman?
>>>>>>> 3) What files are used when generating migration scripts? I assume the
>>>>>>> changelog (xml) files?
>>>>>>>
>>>>>>> -Eric
>>>>>>>
>>>>>>> On 6/20/2015 9:37 PM, Brandon Gaisford wrote:
>>>>>>>> Here you go guys. I think this doc will help you better understand
>>>>>>>> the project. I don’t have Brett’s email, so perhaps one of you could
>>>>>>>> forward it along.
>>>>>>>>
>>>>>>>> Laters,
>>>>>>>>
>>>>>>>> Brandon
>>>>>>>>
>>>>>>
>>>>
>>>
>
APIMAN-549 - Can't add a plugin when using postgres
by Brandon Gaisford
Eric,
I spent a couple of hours this afternoon trying to update io.apiman.manager.test.server.ManagerApiTestServer to run the Maven tests against an external Postgres database. I wasn’t successful and ran out of time today; I’m currently stuck on JPA throwing “Transaction not active” errors. I was hoping I could get this done today and help with the testing/validation effort against the new DDLs, but unfortunately not. Sorry dude, I tried. I’ll pick it up again tomorrow as I have time, unless you come up with a better approach.
Laters,
Brandon
Announcement: version 1.1.5.Final
by Eric Wittmann
Hello again everyone. A new version of apiman has been released.
http://apiman.io/
The latest version is now 1.1.5.Final. You can find the release notes here:
https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12314121&versi...
This release is mostly bug fixes, particularly a couple of ugly ones
introduced in 1.1.4.Final. Of note: our MySQL and Postgres DDLs
should be correct for this version.
Also note that you can now use custom implementations of various apiman
core components (e.g. the storage layer, or the metrics layer) by
contributing them via plugins! This is great if you want to customize
apiman, but you'll obviously need to know what you're doing. For
example, if you want to store the API Manager data in MongoDB instead of
JPA or Elasticsearch, you can now provide a MongoDB storage
implementation in a plugin, with no need to rebuild apiman or futz with
getting your code on the classpath.
-Eric
Liquibase without a database connection
by Brandon Gaisford
From the Liquibase blog: “The new 3.4.0 release of Liquibase expands offline support with a new ‘snapshot’ parameter which can be passed to the offline URL, pointing to a saved database structure.” Posted 02 Jul 2015 by Nathan Voxland [1]
Good timing! I was just trying to do a diff against a snapshot and hit an exception saying the operation isn’t supported. But lo and behold, they just released support for it. Sometimes you’re lucky. :) This will be very handy.
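The new parameter slots into the same offline URL scheme as before (the snapshot file name here is just an example):

```
# 3.4.0: diff against a saved database structure instead of a live DB
offline:postgresql?snapshot=apiman-snapshot.json
```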
Laters,
Brandon
[1] http://www.liquibase.org/2015/07/without-a-connection.html
policydefs index
by Brandon Gaisford
Hey Eric,
I was just adding all the indexes from the out-of-date DDLs in the distro to Liquibase and came across the one below. It looks like the PolicyDefinitionBean has changed and the policyId column no longer exists. What, if anything, would you like to use in its place for the index?
From: apiman_postgresql.ddl
--
-- Name: policydefs
--
CREATE TABLE policydefs (
id character varying(255) NOT NULL,
description character varying(512) NOT NULL,
icon character varying(255) NOT NULL,
name character varying(255) NOT NULL,
policyimpl character varying(255) NOT NULL,
policyId character varying(255)
);
ALTER TABLE ONLY policydefs
ADD CONSTRAINT PK_policydefs PRIMARY KEY (id);
ALTER TABLE ONLY policydefs
ADD CONSTRAINT FK_policydefs_1 FOREIGN KEY (policyId) REFERENCES plugins(id);
CREATE INDEX IDX_FK_policydefs_1 ON policydefs (policyId);
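Whatever column ends up replacing it, the Liquibase version of that index would be a changeset along these lines (`pluginId` is just a guess at the new FK column name):

```xml
<changeSet id="policydefs-idx-1" author="apiman">
    <createIndex indexName="IDX_FK_policydefs_1" tableName="policydefs">
        <column name="pluginId"/>
    </createIndex>
</changeSet>
```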