The Lost Art of Plpgsql

One of the big features talked about when PostgreSQL 11 was released was the new stored procedure implementation. This gave Postgres a more standard procedure interface compared to the previous use of functions. This is particularly useful for folks doing database migrations, where they may have been using the standard CALL syntax vs. Postgres' traditional SELECT function(); syntax. So it struck me as odd earlier this year when I noticed that, despite the hoopla, a year later there was almost zero in the way of presentations and blog posts on either the new stored procedure functionality or the use of plpgsql in general.
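To make the difference concrete, here's a minimal sketch (the table and routine names are just illustrative, not from any real schema). Before Postgres 11, side-effecting logic lived in functions and was invoked with SELECT; with 11, you can define a true procedure and invoke it with the standard CALL syntax, and procedures can even manage their own transactions:

```sql
-- Pre-11 style: a void-returning function, invoked via SELECT
CREATE FUNCTION log_visit(page text) RETURNS void AS $$
BEGIN
    INSERT INTO visits (page, seen_at) VALUES (page, now());
END;
$$ LANGUAGE plpgsql;

SELECT log_visit('/home');

-- Postgres 11+: a true procedure, invoked with the standard CALL syntax
CREATE PROCEDURE log_visit_proc(page text) AS $$
BEGIN
    INSERT INTO visits (page, seen_at) VALUES (page, now());
END;
$$ LANGUAGE plpgsql;

CALL log_visit_proc('/home');
```

For migrations from other databases, that second form is the one that lines up with what existing application code expects.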

And so I got the idea that maybe I would write such a talk and present it at PGCon; a nod to the past and the many years I’ve spent working with plpgsql in a variety of roles. The committee liked the idea (disclosure: I am on the PGCon committee, but didn’t advocate for myself) and so this talk was born. For a first-time talk I think it turned out well, though it could definitely use some polish; but I’m happy that it did help spark some conversation, and it has actually given me a few items worth following up on, hopefully in future blog posts.

Video should be available in a few weeks, but for now, I’ve gone ahead and uploaded the slides on slideshare.

The Ghost of phpPgAdmin

TL;DR: This evening I put the final touches on a new release of phpPgAdmin, version 5.6. This release adds official support for all recent Postgres versions, fixes a number of smaller bugs, and includes several language updates. While I think upstream packagers need not worry about absorbing this release, I’ve made downloads generally available from the Github project page, or you can just pull from git to get the latest code. Note this release is designed to run on PHP 5.6.

Now for the backstory…

After much hoopla a few years back about new admin clients and talk of the pgAdmin rewrite, most of the regular contributors had pretty much moved on from the project, hoping to see a clearly better admin tool surface as a replacement. Instead, I saw multiple projects launch, none of which captured the hearts and minds so to speak, and saw the number of pull requests on an ever more abandoned-looking project continue to pile up, not to mention thousands of downloads.

As for me, while not doing much publicly, privately I was still maintaining two private copies of the code: one which had support for newer Postgres servers, and one which had support for PHP 7; both in rough shape. While my schedule doesn’t leave much time for random hacking, about a month ago I saw an upcoming block where I would be conferencing three weeks in a row and suspected I could probably find some time during my travels to do some updates. After a little bit of thought, I decided to do two releases. The first would add support up through Postgres 11, the most recently released version of the server software, and the second would add the aforementioned PHP 7 support. Granted, it’s taken longer than I had hoped, probably mostly because that’s how software engineering works, but also because I had to relearn how we were actually running this project; I think I’ve got most of that worked out now.

I suspect the two releases might annoy some people, given that PHP 5.6 is years old and in many people’s minds EOL. But it turns out that a lot of people still run various 5.x versions of PHP, so this is a nod to that user base. If you are one of the people who has been waiting for a PHP 7 release, don’t worry. As mentioned, I already have a patch set, so I’m hoping to have that completed in the next couple of weeks. Once that is released, I think it will make for a good base to start adding new features again; there is a bunch of stuff that could be added into phpPgAdmin, it’s just a matter of re-igniting the engine, so to speak.

In any case, there is life again in the old project. Long live open source.

Now With More SSL

Some of you may have noticed a minor URL change, so I thought I should probably toss out a quick blog update to let you know that I’ve switched my blog to be SSL only. I’ve been meaning to do this for a bit but round tuits and all; I ended up finally making the switch as I also moved the site behind Cloudflare, which provides a number of web-related services, including the aforementioned SSL coverage (for free) as well as caching services, so in theory the site should be just a bit faster for folks.

For those with a deep curiosity, this means my stack is now Octopress as a static site generator. Those files are uploaded to Surge.sh, which provides static site hosting. And now Cloudflare sits in front of the site, managing DNS, providing caching, and the aforementioned SSL. I also keep a copy of the site at Heroku that I occasionally play with. This means my typical workflow is using vim for site updates, and git pushes to get things to the various places they need to go.

Return of Pagila

In early September, I gave a live demo of Postgres 10 replication at the Postgres Open conference in California. As part of the prep work, I dusted off one of my old projects… “Pagila”. For those unfamiliar, Pagila was a port of the Sakila sample database, created by Mike Hillyer for MySQL. The goal of the project was to provide a simple schema, with similarities to other systems, for showcasing different Postgres features. Originally I hosted it on PgFoundry back in the day, and for the most part it has lived there quietly, until now.

One of the reasons Postgres 10 is significant for Pagila is that Pagila contains an example partitioned table, and as many have heard, Postgres 10 contains an initial release of simplified, declarative partitioning capabilities. While not yet a full replacement for the existing inheritance-based partitioning, for the purposes of showing off examples, it was time to update the Pagila schema to show off this new feature. And given I had to do that work, I thought maybe it was time to make a new “official” release.
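For those who haven't seen it yet, the new declarative syntax looks something like the following. This is just a sketch in the spirit of Pagila's payment table (the exact columns in the released schema differ), partitioned by range on the payment date:

```sql
-- Postgres 10 declarative partitioning: the parent declares the scheme...
CREATE TABLE payment (
    payment_id   integer NOT NULL,
    customer_id  integer NOT NULL,
    amount       numeric(5,2) NOT NULL,
    payment_date timestamptz NOT NULL
) PARTITION BY RANGE (payment_date);

-- ...and each partition declares the slice of data it holds
CREATE TABLE payment_p2017_01 PARTITION OF payment
    FOR VALUES FROM ('2017-01-01') TO ('2017-02-01');
CREATE TABLE payment_p2017_02 PARTITION OF payment
    FOR VALUES FROM ('2017-02-01') TO ('2017-03-01');
```

Compared to the old inheritance approach, you no longer need CHECK constraints or insert triggers to route rows; the server handles tuple routing itself.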

So, I’ve now created a new Pagila project page on Github. Unfortunately I was unable to get a full copy of the previous versions from PgFoundry to do a full import, but I did have some copies of older releases lying around, so I used those to recreate the history in git for past branches. This means if you need a version of the schema that works on Postgres 9.x, you can check out one of the older branches. Once Postgres 10 is finally released, I’ll go ahead and tag/branch a version 10 release to complement it. In the meantime, please feel free to play with this, and if you’d like to contribute, you can find me on the Postgres Slack Team or submit a pull request through Github.

Postgres 9.4 - a First Look

Today I gave a talk at PGCon about the upcoming features in 9.4. As the beta was released just last week, I think it’s a fairly accurate representation of what should ultimately end up in 9.4. Of course, in the course of a talk I couldn’t cover everything, but I think it should give a good primer for anyone looking to upgrade.

I want to give a big thanks to Magnus Hagander and Dave Page, who gave talks on earlier versions of 9.4, which were invaluable in helping me put together my own slide deck. Also thanks to Michael Paquier and Heikki Linnakangas, who provided supplemental materials. Also, no one could do these talks without the work of depesz; I would strongly encourage those looking for more information on 9.4 to check out his blog. Finally, I’d like to thank all of the Postgres developers who have worked on the 9.4 release, without whom we wouldn’t have a release.

Natural Consequence

This weekend I noticed I hadn’t updated the bio on my blog. It’s a one letter change, from COO to CEO, but there’s a lot tied up in that letter. When I started at OmniTI I would never have guessed that I would end up here. All I was looking for was Bigger and Badder Postgres challenges to work on. But maybe I should have seen it coming.

One of the driving factors in my career has been my desire to work on the most important part of whatever it was I was working on. Early on this led me to web development as a means to share knowledge between team members. After a while, I came to the conclusion that usability and front-end web work were the most important; those were the areas that directly impacted customers and users, and they mattered more than the quality of the code or the systems everything ran on. Get that wrong and nothing else matters. Eventually I ended up doing a 180 and started focusing on databases. It wasn’t that the database itself was important, but the data inside those systems was the thing I determined was most important for a business. You can always replace your front end, your application code, even the servers themselves, but lose the customer data, and you’re done.

As time went by, my thoughts changed here as well. I still think data is the one thing that is irreplaceable, but eventually I sought out larger challenges and more responsibility. When Theo and I discussed taking the COO role two years ago, I recognized that we needed someone who could work across the different groups within OmniTI and help people achieve their goals. It was an area I thought I could have some impact, so I stepped into the role. At the time I didn’t worry about the next step, but if you take this philosophy of working on the most important thing to its natural consequence, the role of CEO should have been more obvious. Maybe not at OmniTI, but at some point it was bound to happen.

Less Alarming Alerts

A week or so ago, I gave my talk, ”Less Alarming Alerts!”, at Velocity Europe. The presentation covers several issues around the process of monitoring, alerting, and waking people up at 2 in the morning because things broke. I find that a lot of people in web operations suffer from excessive paging, as it’s far easier to add checks than it is to remove them; this talk discusses some of how we approach this problem when helping people manage their operations. Special thanks to the DevOpsDC folks for letting me do a first run back in October. You can grab the slides from the Velocity site, or view them on slideshare.

New GPG Key

A few weeks ago I finally got around to making a new gpg key. My old key was created 9 years ago at OSCON, and I remember at the time picking an “extra large key size” (1024 bits) figuring that would last me a really long time. I guess 9 years is a really long time in computers, but 1024-bit keys are no longer considered strong enough, so a new, stronger key seemed warranted. At the moment, both keys will work, but the newer one should be used going forward. It’s already been signed by a number of folks, so feel free to grab it from a public server if you want.

Past, Present, and Pachyderm

I’m currently in Germany, having given my talk on Postgres 9.3 at PGConf DE (and last week at PGConf EU).

Prior to that, I recently gave a talk at the All Things Open conference entitled “Past, Present, and Pachyderm”. The original idea for the talk was to give a highlight of new features coming in Postgres 9.3, however we took a slightly different approach for the ATO2013 crowd, providing some history and discussion around the Postgres project, as well as taking a look at some ideas about future development and direction.

The talk went quite well, and I think really struck a good balance for speaking to a less Postgres focused crowd than the 9.3 talks I have given at Postgres specific conferences.

MariaDB and the Quest for Oracle Freedom

People really don’t like Oracle. Enough so that SkySQL just got $20 million in funding from Intel to help it continue to build a MySQL alternative. Now personally I don’t have the hatred that a lot of people do for Oracle, but when I look at the pricing and service offerings around Oracle’s database, Solaris operating system, and even things like ATG, I know that we can offer them comparable solutions at half the price, with far better service, so I get why people want to try to find alternatives to Oracle.

But here is what I don’t get. This week I went to the All Things Open conference and while I was there, I happened to catch the tail end of a SkySQL talk on new MariaDB features. One of the features the speaker was describing apparently has issues if you work with MyISAM tables, so he asked how many people in the crowd used MyISAM. Not a single person raised their hand. For most database folks, this isn’t surprising; for most people doing traditional RDBMS work, you want an MVCC-based system of some kind, so using InnoDB seems like the logical choice. The problem is this: if your community is built around the idea of being free of Oracle, there is a problem when your user base depends entirely on a storage engine still owned by Oracle.

So what are these investors buying with their $20 million? If you are trying to secure the future of your database choice, I think this is a swing and a miss. Sure, MariaDB of today is better than MySQL of back then, but from a technology control standpoint, all you’ve done is buy yourself a ticket back to 2005, when Oracle first purchased Innobase and left MySQL scrambling. Any argument that the MariaDB community doesn’t have to worry about this is basically an argument for why MySQL users might as well stick with Oracle MySQL. I suppose that $20 million might buy another attempt at a new storage engine, but we’ve been down that road before, and it’s not pretty.

PS. If you’ve got $20 million and a desire to help Solaris users get free of Oracle, the OmniOS team would be happy to cash that check.